Translation becomes part of 'refraction': "... the rather long term strategy, of which translation is only a part, and which has as its aim the manipulation of foreign work in the service of certain aims that are felt worthy of pursuit in the native culture..." (1988:204). This is indeed a powerful theory for studying translation, as it accords translation as much significance as criticism and interpretation. Lefevere goes on to provide some impressive analytical tools and perspectives for studying literary translation.
'The ideological and poetological constraints under which translations are produced should be explicated, and the strategy devised by the translator to deal with those constraints should be described: does he or she make a translation in a more descriptive or in a more refractive way? What are the intentions with which he or she introduces foreign elements into the native system? Equivalence, fidelity, freedom and the like will then be seen more as functions of a strategy adopted under certain constraints, rather than absolute requirements, or norms that should or should not be imposed or respected. It will be seen that 'great ages of translation' occur whenever a given literature recognizes another as more prestigious and tries to emulate it. Literatures will be seen to have less need of translation(s) when they are convinced of their own superiority. It will also be seen that translations are often used (think of the Imagists) by adherents of an alternative poetics to challenge the dominant poetics of a certain period in a certain system, especially when that alternative poetics cannot use the work of its own adherents to do so, because that work is not yet written' (1984:98-99).
Another major theorist working along similar lines to Lefevere is Gideon Toury (1985). His approach is what he calls Descriptive Translation Studies (DTS). He emphasizes that translations are facts of one system only, the target system: it is the target or recipient culture, or a certain section of it, that initiates the decision to translate, and consequently translators operate first and foremost in the interest of the culture into which they are translating. Toury very systematically charts out a step-by-step guide to the study of translation. He stresses that the study should begin with the empirically observed data, that is, the translated texts, and proceed from there towards the reconstruction of non-observational facts, rather than the other way round, as is usually done in 'corpus'-based and traditional approaches to translation. The most interesting thing about Toury's approach (1984) is that it takes into consideration phenomena such as 'pseudo-translation', that is, texts passed off as translations which in fact are not. Since the problem of distinguishing a translated text from a non-translated text arises at the very outset, Toury stipulates that for his procedure 'translation' will be taken to be 'any target-language utterance which is presented or regarded as such within the target culture, on whatever grounds'. In this approach pseudotranslations are 'just as legitimate objects for study within DTS as genuine translations. They may prove to be highly instructive for the establishment of the general notion of translation as shared by the members of a certain target language community'.
 


HISTORY OF MACHINE TRANSLATION 
Kənan NURİ 
Translation 3 
 
The history of machine translation generally starts in the 1950s, although work 
can be found from earlier periods. The Georgetown experiment in 1954 involved 
fully automatic translation of more than sixty Russian sentences into English. The 
experiment was a great success and ushered in an era of significant funding for 
machine translation research in the United States. The authors claimed that within 
three to five years, machine translation would be a solved problem. In the Soviet
Union, similar experiments were performed shortly after. 
However, actual progress was much slower, and after the ALPAC report in 1966, which found that ten years of research had failed to fulfill expectations,
funding was dramatically reduced. Starting in the late 1980s, as computational power 
increased and became less expensive, more interest began to be shown in statistical 
models for machine translation. 
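
The core idea behind these statistical models, as developed in the IBM work of the early 1990s, is the noisy-channel formulation: among candidate target sentences e for a source sentence f, pick the one that maximizes P(e) · P(f | e), where P(e) comes from a language model and P(f | e) from a translation model. A minimal sketch of this decoding rule in Python, with toy hand-filled probability tables standing in for models that a real system would estimate from large corpora:

# Toy noisy-channel decoder: score each candidate translation e of a
# source sentence f by P(e) * P(f | e) and keep the best one.
# Both probability tables are invented for illustration only.

language_model = {               # P(e): prior over target sentences
    "the house is small": 0.04,
    "the house is little": 0.01,
}
translation_model = {            # P(f | e): source given target
    ("das haus ist klein", "the house is small"): 0.30,
    ("das haus ist klein", "the house is little"): 0.25,
}

def decode(source, candidates):
    """Return the candidate maximizing P(e) * P(source | e)."""
    return max(candidates,
               key=lambda e: language_model.get(e, 0.0)
                             * translation_model.get((source, e), 0.0))

print(decode("das haus ist klein",
             ["the house is small", "the house is little"]))
# -> "the house is small": the language model breaks the near-tie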
Today there is still no system that provides the holy grail of "fully automatic high quality translation of unrestricted text" (FAHQUT). However, there are many programs now available that are capable of providing useful output within strict constraints; several of them are available online, such as Google Translate and the SYSTRAN system which powers AltaVista's BabelFish (Yahoo's since May 9, 2008).
THE BEGINNING 
The history of machine translation dates back to the seventeenth century, when 
philosophers such as Leibniz and Descartes put forward proposals for codes which 
would relate words between languages. All of these proposals remained theoretical, 
and none resulted in the development of an actual machine. 
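
Although those seventeenth-century proposals were never built, the underlying idea, a language-neutral code through which the words of different languages are related, is easy to render schematically. A toy sketch in Python (the code symbols and dictionary entries are invented for illustration, not drawn from Leibniz or Descartes):

# Toy version of a shared "universal code" relating words across
# languages: translation is a lookup into the neutral code followed
# by a lookup out of it. All entries are invented for illustration.

TO_CODE = {
    ("en", "house"): "C1", ("fr", "maison"): "C1",
    ("en", "small"): "C2", ("fr", "petit"): "C2",
}
FROM_CODE = {
    ("en", "C1"): "house", ("fr", "C1"): "maison",
    ("en", "C2"): "small", ("fr", "C2"): "petit",
}

def translate_word(word, src, tgt):
    code = TO_CODE[(src, word)]      # encode into the neutral code
    return FROM_CODE[(tgt, code)]    # decode into the target language

print(translate_word("house", "en", "fr"))  # -> "maison"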
The first patents for "translating machines" were applied for in the mid-1930s. One proposal, by Georges Artsrouni, was simply an automatic bilingual dictionary using paper tape. The other proposal, by Peter Troyanskii, a Russian, was more detailed. It included both a bilingual dictionary and a method for dealing with grammatical roles between languages, based on Esperanto. The system was split into three stages: the first was for a native-speaking editor in the source language to organize the words into their logical forms and syntactic functions; the second was for the machine to "translate" these forms into the target language; and the third was for a native-speaking editor in the target language to normalize this output. His scheme
remained unknown until the late 1950s, by which time computers were well-known. 
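
In modern terms Troyanskii's scheme is a three-stage pipeline: human pre-editing, mechanical transfer through a bilingual dictionary, and human post-editing. A schematic toy rendering in Python (the function names, tags, and dictionary entries are illustrative assumptions, not a reconstruction of his actual machine):

# Toy outline of Troyanskii's three-stage scheme; all details invented.
BILINGUAL_DICTIONARY = {"bolshoy": "big", "dom": "house"}

def pre_edit(source_tokens):
    # Stage 1 (human, source language): reduce words to base forms and
    # mark their syntactic function; here we just attach a crude tag.
    return [(w, "noun" if w == "dom" else "adj") for w in source_tokens]

def transfer(tagged_forms):
    # Stage 2 (machine): replace each base form with its dictionary
    # equivalent, carrying the grammatical marker across unchanged.
    return [(BILINGUAL_DICTIONARY[w], tag) for w, tag in tagged_forms]

def post_edit(tagged_output):
    # Stage 3 (human, target language): normalize the marked-up word
    # sequence into a natural sentence.
    return " ".join(w for w, _ in tagged_output)

print(post_edit(transfer(pre_edit(["bolshoy", "dom"]))))  # -> "big house"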
THE EARLY YEARS 
The first proposals for machine translation using computers were put forward 
by Warren Weaver, a researcher at the Rockefeller Foundation, in his July 1949 memorandum.

