Proposals for machine translation were first put forward in Warren Weaver's 1949 memorandum. These proposals were based on information theory, on the successes of code breaking during the Second World War, and on speculation about universal underlying principles of natural language.
A few years after these proposals, research began in earnest at many universities in the United States. On 7 January 1954, the Georgetown-IBM experiment, the first public demonstration of an MT system, was held in New York at the head office of IBM. The demonstration was widely reported in the newspapers and attracted much public interest. The system itself, however, was no more than what today would be called a "toy" system: it had just 250 words and translated just 49 carefully selected Russian sentences into English, mainly in the field of chemistry. Nevertheless, it encouraged the view that machine translation was imminent and, in particular, stimulated the financing of MT research, not just in the US but worldwide.
Early systems used large bilingual dictionaries and hand-coded rules for fixing the word order in the final output. This approach was eventually found to be too restrictive, and developments in linguistics at the time, for example generative linguistics and transformational grammar, were proposed as a way to improve the quality of translations. A minimal sketch of the early approach follows.
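To make the limitation concrete, here is a minimal Python sketch of such a direct, dictionary-plus-rules approach. The toy French-English lexicon and the single adjective-noun reordering rule are invented for illustration; they are not taken from any of the historical systems described here.

# A minimal sketch of 1950s-style "direct" machine translation:
# word-for-word lookup in a bilingual dictionary, followed by a
# hand-coded rule that patches up the word order of the output.
# The toy French-English lexicon and the single reordering rule
# are illustrative inventions, not historical data.

LEXICON = {
    "le": "the", "chat": "cat", "noir": "black",
    "mange": "eats", "poisson": "fish",
}

ADJECTIVES = {"noir"}   # marks which source words are adjectives


def translate(sentence: str) -> str:
    words = sentence.lower().split()

    # Hand-coded reordering rule: French places the adjective after
    # the noun ("chat noir"), English before it ("black cat").
    i = 0
    while i < len(words) - 1:
        if words[i + 1] in ADJECTIVES:
            words[i], words[i + 1] = words[i + 1], words[i]
        i += 1

    # Word-for-word dictionary lookup; unknown words pass through.
    return " ".join(LEXICON.get(w, w) for w in words)


print(translate("le chat noir mange le poisson"))
# -> "the black cat eats the fish"

The rigidity is easy to see: every new word-order divergence between the two languages requires yet another hand-written rule, which is precisely why researchers turned to more general linguistic theories.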
During this time, operational systems were installed. The United States Air Force 
used a system produced by IBM and Washington University, while the Atomic 
Energy Commission in the United States and EURATOM in Italy used a system 
developed at Georgetown University. While the quality of the output was poor, it 
nevertheless met many of the customers' needs, chiefly in terms of speed. 
At the end of the 1950s, Yehoshua Bar-Hillel, a researcher asked by the US government to look into machine translation, put forward an argument against the possibility of "Fully Automatic High Quality Translation" by machines. The argument is one of semantic ambiguity, or double meaning. Consider the following sentence:
Little John was looking for his toy box. Finally he found it. The box was in the pen. 
The word pen can have two meanings here: the first, something you use to write with; the second, a container or enclosure of some kind. To a human the intended meaning is obvious, but Bar-Hillel claimed that without a "universal encyclopedia" of world knowledge a machine would never be able to deal with this problem. Today, this type of semantic ambiguity can be addressed by writing source texts for machine translation in a controlled language, one whose vocabulary assigns each word exactly one meaning.
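As a rough illustration of the controlled-language idea, the Python sketch below checks a source text against a one-sense-per-word vocabulary before it is handed to the MT system. All word lists here are hypothetical and far smaller than those of a real controlled language such as Simplified Technical English.

# A sketch of controlled-language authoring support: before a source
# text goes to the MT system, each word is checked against an approved
# vocabulary in which every entry carries exactly one sense. Words
# known to be ambiguous are flagged with unambiguous substitutes.
# All word lists here are invented for illustration.

APPROVED = {
    "box": "container with flat sides",
    "found": "discovered the location of",
    "toy": "object for a child to play with",
}

# Ambiguous words banned from the controlled vocabulary,
# mapped to an approved replacement for each sense.
BANNED = {
    "pen": ["writing instrument: use 'ballpoint'",
            "enclosure: use 'playpen'"],
}


def check(text: str) -> list[str]:
    """Return a list of problems; an empty list means the text conforms."""
    problems = []
    for word in text.lower().replace(".", "").split():
        if word in BANNED:
            senses = "; ".join(BANNED[word])
            problems.append(f"'{word}' is ambiguous ({senses})")
    return problems


for issue in check("The box was in the pen."):
    print(issue)
# -> 'pen' is ambiguous (writing instrument: use 'ballpoint';
#    enclosure: use 'playpen')

An author who writes "playpen" instead of "pen" removes the ambiguity before the machine ever sees the sentence.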
THE 1960s, THE ALPAC REPORT AND THE SEVENTIES 
Research in the 1960s in both the Soviet Union and the United States concentrated mainly on the Russian-English language pair. The objects of translation were chiefly scientific and technical documents, such as articles from scientific journals. The rough translations produced were sufficient to get a basic understanding of the articles. If an article discussed a subject deemed to be of security interest, it was sent to a human translator for a complete translation; if not, it was discarded.
A great blow came to machine translation research in 1966 with the publication of the ALPAC report. The report was commissioned by the US government and performed by ALPAC, the Automatic Language Processing Advisory Committee, a group of seven scientists convened by the US government in 1964. The US government was concerned that, despite significant expenditure, little progress was being made. The report concluded that machine translation was more expensive, less accurate and slower than human translation, and that, despite the expense, machine translation was not likely to reach the quality of a human translator in the near future.
The report, however, recommended that tools be developed to aid translators (automatic dictionaries, for example) and that some research in computational linguistics should continue to be supported.
The publication of the report had a profound impact on research into machine translation in the United States and, to a lesser extent, in the Soviet Union and the United Kingdom. Research, at least in the US, was almost completely abandoned for over a decade. In Canada, France and Germany, however, research continued. In the US the main exceptions were the founders of Systran (Peter Toma) and Logos (Bernard Scott), who established their companies in 1968 and 1970 respectively and served the US Department of Defense. In 1970, the Systran system was installed for the United States Air Force and subsequently, in 1976, for the Commission of the European Communities. The METEO System, developed at the Université de Montréal, was installed in Canada in 1977 to translate weather forecasts from English to French, and was translating close to 80,000 words per day, or 30 million words per year, until it was replaced by a competitor's system on 30 September 2001.
While research in the 1960s concentrated on limited language pairs and inputs, demand in the 1970s was for low-cost systems that could translate a range of technical and commercial documents. This demand was spurred by increasing globalization and by the growing need for translation in Canada, Europe, and Japan.
THE 1980s AND EARLY 1990s
By the 1980s, both the diversity and the number of installed systems for machine 
translation had increased. A number of systems relying on mainframe technology 
were in use, such as Systran, Logos, and Metal. 
As a result of the improved availability of microcomputers, there was a market 
for lower-end machine translation systems. Many companies took advantage of this 
in Europe, Japan, and the USA. Systems were also brought onto the market in China, 
Eastern Europe, Korea, and the Soviet Union. 
During the 1980s there was a great deal of MT activity, especially in Japan. With the Fifth Generation Computer project, Japan intended to leapfrog its competition in computer hardware and software, and one project that many large Japanese electronics firms found themselves involved in was creating software for translating to and from English (Fujitsu, Toshiba, NTT, Brother, Catena, Matsushita, Mitsubishi, Sharp, Sanyo, Hitachi, NEC, Panasonic, Kodensha, Nova, and Oki).
Research during the 1980s typically relied on translation through some variety 
of intermediary linguistic representation involving morphological, syntactic, and 
semantic analysis. 
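In outline, such a pipeline has three stages: analysis of the source sentence into an intermediary representation, transfer of that representation into the target language, and generation of surface text from it. The Python sketch below shows the shape of the approach only; every rule and structure in it is a toy placeholder for what were, in real systems, large hand-built grammars.

# A skeletal sketch of 1980s transfer-style MT: analyse the source
# sentence into an intermediary linguistic representation, transfer
# that representation into the target language, then generate surface
# text from it. Every rule here is a toy placeholder; real systems
# used large hand-built grammars at each stage.

from dataclasses import dataclass


@dataclass
class Clause:
    """Toy intermediary representation: a predicate plus arguments."""
    predicate: str
    subject: str
    obj: str


def analyse(source: str) -> Clause:
    # Morphological/syntactic analysis, reduced here to a fixed
    # subject-verb-object split of a three-word sentence.
    subject, verb, obj = source.lower().rstrip(".").split()
    return Clause(predicate=verb, subject=subject, obj=obj)


# Lexical transfer: map source-language lemmas to target lemmas.
TRANSFER = {"katze": "cat", "frisst": "eats", "fisch": "fish"}


def transfer(c: Clause) -> Clause:
    return Clause(*(TRANSFER.get(x, x)
                    for x in (c.predicate, c.subject, c.obj)))


def generate(c: Clause) -> str:
    # Target-language generation: realise the clause as English SVO.
    return f"The {c.subject} {c.predicate} the {c.obj}."


print(generate(transfer(analyse("Katze frisst Fisch"))))
# -> "The cat eats the fish."

The attraction of the intermediary representation is modularity: analysis and generation components can, in principle, be reused across language pairs, with only the transfer step rewritten for each pair.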