1 vote

I am working with Twitter data. I collected tweets with the Streaming API, and the app's output is JSON. I wrote the tweet data to a text file, and now I see Unicode escape sequences instead of Turkish characters. I don't want to do find/replace by hand in Notepad++. Is there an automatic way, in Python, to open the txt file, read all the data, and replace the Unicode escape sequences with the Turkish characters?

Here are the Unicode escape sequences and the Turkish characters I want to replace them with.

  • ğ - \u011f
  • Ğ - \u011e
  • ı - \u0131
  • İ - \u0130
  • ö - \u00f6
  • Ö - \u00d6
  • ü - \u00fc
  • Ü - \u00dc
  • ş - \u015f
  • Ş - \u015e
  • ç - \u00e7
  • Ç - \u00c7

I tried two different approaches:

#!/usr/bin/env python

# -*- coding: utf-8 -*- 

import re

dosya = open('veri.txt', 'r')

for line in dosya:
    match = re.search(line, "\u011f")
    if (match):
        replace("\u011f", "ğ")

dosya.close()

and:

#!/usr/bin/env python

# -*- coding: utf-8 -*- 

f1 = open('veri.txt', 'r')
f2 = open('veri2.txt', 'w')

for line in f1:
    f2.write=(line.replace('\u011f', 'ğ')) 
    f2.write=(line.replace('\u011e', 'Ğ'))
    f2.write=(line.replace('\u0131', 'ı'))
    f2.write=(line.replace('\u0130', 'İ'))
    f2.write=(line.replace('\u00f6', 'ö'))
    f2.write=(line.replace('\u00d6', 'Ö'))
    f2.write=(line.replace('\u00fc', 'ü'))
    f2.write=(line.replace('\u00dc', 'Ü'))
    f2.write=(line.replace('\u015f', 'ş'))
    f2.write=(line.replace('\u015e', 'Ş'))
    f2.write=(line.replace('\u00e7', 'ç'))
    f2.write=(line.replace('\u00c7', 'Ç'))

f1.close()
f2.close()

Neither of these worked. How can I make it work?

How about you show the code you used to get data from Twitter? It would be easier to update it to output the data correctly in the first place. – Anonymous
'\u00c7' is a Unicode escape sequence and is actually the same as 'Ç'. Try running '\u00c7' == 'Ç' in the Python interpreter; it will return True. More information here: docs.python.org/3/howto/… – Manuel Jacob
Another problem is that f2.write=(line.replace('\u00c7', 'Ç')) does not do what you want. It replaces the write method with a string instead of calling the method (which would be f2.write(...)). – Manuel Jacob
class StdOutListener(StreamListener):
    def on_data(self, data):
        print(data)
        return True
    def on_error(self, status):
        print(status)

if __name__ == '__main__':
    l = StdOutListener()
    auth = OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    stream = Stream(auth, l)
    stream.filter(languages=["tr"], track=["words in Turkish"])

– S.SavaS
and I run python3 twitter_streaming.py > data.txt to get data from Twitter. – S.SavaS
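
A quick sketch illustrating the two comments above (not a fix for the question's code; veri2.txt is just an example output file):

print('\u00c7' == 'Ç')   # True: in Python 3 the escape sequence and the literal character are the same string

with open('veri2.txt', 'w') as f2:
    line = 'G\u00dcNAYDIN'   # this literal is already the string 'GÜNAYDIN'
    f2.write(line)           # correct: call write(); "f2.write = ..." would only rebind the attribute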

2 Answers

5 votes

JSON allows both "escaped" and "unescaped" characters. The reason the Twitter API returns only escaped characters is that it can then use the ASCII encoding, which increases interoperability. For Turkish characters you need another encoding, one that can actually represent them (UTF-8 or ISO-8859-9, for example). Opening a file with the open function uses your current locale encoding by default, which is probably what your editor expects. If you want the output file to have e.g. the ISO-8859-9 encoding, pass encoding='ISO-8859-9' as an additional parameter to open.
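
For example, a minimal sketch of passing an explicit encoding when opening the output file (the file name here is just an example):

out = open('output.txt', 'w', encoding='ISO-8859-9')
out.write('ğĞıİöÖüÜşŞçÇ')   # all of these characters exist in ISO-8859-9
out.close()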

You can read a file containing a JSON object with the json.load function. This returns a Python object with the escaped characters decoded. Writing it again with json.dump and passing ensure_ascii=False as an argument writes the object back to a file without encoding Turkish characters as escape sequences. An example:

import json

# json.load decodes the \uXXXX escape sequences; ensure_ascii=False writes the characters back unescaped
inp = open('input.txt', 'r')
out = open('output.txt', 'w')
in_as_obj = json.load(inp)
json.dump(in_as_obj, out, ensure_ascii=False)

Your file isn't really a JSON file, but instead a file containing multiple JSON objects. If each JSON object is on its own line, you can try the following:

import json

inp = open('input.txt', 'r')
out = open('output.txt', 'w')
for line in inp:
    # keep blank lines unchanged
    if not line.strip():
        out.write(line)
        continue
    # decode one JSON object per line and write it back without escape sequences
    in_as_obj = json.loads(line)
    json.dump(in_as_obj, out, ensure_ascii=False)
    out.write('\n')

But in your case it's probably better to write unescaped JSON to the file in the first place. Try replacing your on_data method with (untested):

def on_data(self, raw_data):
    # requires "import json" at the top of the module
    data = json.loads(raw_data)
    print(json.dumps(data, ensure_ascii=False))
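
Since the output is redirected into a file (python3 twitter_streaming.py > data.txt, per the comments above), print encodes using the locale encoding; if that is not UTF-8, one way to force it is the PYTHONIOENCODING environment variable, for example:

PYTHONIOENCODING=utf-8 python3 twitter_streaming.py > data.txt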

3 votes

You can use this method to transliterate the Turkish characters to their ASCII equivalents:

# Translation table mapping Turkish characters to their ASCII equivalents
translationTable = str.maketrans("ğĞıİöÖüÜşŞçÇ", "gGiIoOuUsScC")

yourText = "Pijamalı Hasta Yağız Şoföre Çabucak Güvendi"
yourText = yourText.translate(translationTable)

print(yourText)   # Pijamali Hasta Yagiz Sofore Cabucak Guvendi
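
If you want to apply the same idea to the question's file (a sketch; it assumes veri.txt is UTF-8 encoded and that you want the ASCII equivalents):

translationTable = str.maketrans("ğĞıİöÖüÜşŞçÇ", "gGiIoOuUsScC")

with open('veri.txt', 'r', encoding='utf-8') as f1, open('veri2.txt', 'w', encoding='utf-8') as f2:
    for line in f1:
        f2.write(line.translate(translationTable))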