
Bauman Moscow State Technical University 
 
 
 
 
 
 
 
O.V. Avdeeva, T.P. Smirnova 
 
 
 
Teaching Reading 
of English-Language Literature 
in the Specialty 
"High-Performance Computer 
Processes and Technologies" 
 
 
 
Study Guide 
 
 
 
 
 
 
 
Moscow 
Bauman MSTU Publishing House 
2007 

 

UDC 802.0 
BBK 81.2 Angl-923 
        A18 

Reviewers: Yu.M. Kozlov, N.V. Palchenko 

Avdeeva O.V., Smirnova T.P. 
    Teaching Reading of English-Language Literature in the Specialty "High-Performance Computer Processes and Technologies": Study Guide. – Moscow: Bauman MSTU Publishing House, 2007. – 40 pp. 

    The guide contains original English-language texts on the main stages of development and the operating principles of supercomputers and networks of a certain type, as well as on the latest research into the use of supercomputers in modern technological processes. It also includes tasks and exercises that help students master the terminology and language patterns needed to understand and translate scientific and technical literature. 
    For third-year students of the Informatics and Control Systems Faculty. 

UDC 802.0 
BBK 81.2 Angl-923 


© Bauman MSTU, 2007 

PREFACE 

The guide contains core and supplementary texts in the students' field of specialization, together with tasks and exercises that help them learn and consolidate the necessary vocabulary and practise translating grammatical constructions. 
The original texts, taken from American technical literature, reflect the latest trends in high-performance computer processes and technologies. By carrying out a lexical and grammatical analysis of these texts, students develop the skills needed to understand and translate scientific and technical literature in this field. 
A vocabulary list with the key terms is given at the end of each of the six units of the guide. 
The guide is intended for students of the Informatics and Control Systems Faculty specializing in "High-Performance Computer Processes and Technologies". 

 

UNIT 1 

TASK 1. Read, translate and retell the text. 

Text 1A. The Next Generation 

What is required for the next generation of computers is not simply 
jamming an ever-larger number of ever-smaller components onto a 
piece of silicon – that line of effort is costing more and yielding proportionately less speed. What’s required is a complete rethinking of the 
way computers are built and the way they process information. From 
von Neumann’s first computer to the world’s fastest single processor, 
the Cray 1 supercomputer, computers have always done tasks essentially one at a time. The central processing unit, or CPU, fetches a pair 
of numbers from memory, adds them and replaces them in memory. 
Processing speed is limited not by the speed of the CPU but by the narrow pathway between the CPU and memory. Shuttling operands and 
results back and forth is like trying to condense all the traffic on a freeway at rush hour down to one lane. Computer scientists call the resulting traffic jam “the von Neumann bottleneck”. Researchers working to 
build the next generation of computers are counting on parallelism (using many processors, each of which works on a piece of a larger problem) to detour around the von Neumann bottleneck and achieve revolutionary increases in speed. 
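As a rough illustration of the parallelism described above, the sketch below (an editorial addition, not part of the original text) sums the same data either serially or split across several worker processes, each handling a piece of the larger problem. It assumes standard Python with the multiprocessing module.

# Editorial illustration only: serial summation versus the same work split
# across several worker processes, each handling a piece of the problem.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker sums its own piece of the data.
    return sum(chunk)

def serial_sum(data):
    # One processor: a single stream of fetch-add-store operations.
    return sum(data)

def parallel_sum(data, workers=4):
    # Split the data into roughly equal chunks, one per worker process.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(processes=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    numbers = list(range(1_000_000))
    print(serial_sum(numbers) == parallel_sum(numbers))   # True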
Progress in developing serial computers, and the difficulty of coordinating more than one CPU, forced parallelism to the background. The 
first truly parallel computers emerged in the 1970s. 
Two trends spurred the research forward: a need for more processing power and the revival of artificial intelligence research. By copying 
some of the brain’s processing strategies, researchers hope to build 
computers capable not only of solving large scientific problems but also 
of reproducing human intellect.  
Computer evolution having main milestones, it is often broken up 
into generations. 
First Generation. From the first computer to the IBM 650, 1947 to 
1956, computers relied on vacuum tubes as their basic switching element, and for memory, they used magnetic drums and cathode ray 
tubes. These early machines were capable of processing about 10,000 instructions per second, similar to the speed of a modern personal computer, and storing 2,000 alphanumeric characters. 
Second Generation. Typical machines were the IBM 7094 and the 
Control Data Corporation’s CDC-6600. This was the era of discrete 
transistors and magnetic core memories, 1957 to 1963. Performance 
was in the neighborhood of 200,000 instructions per second with storage for 32,000 characters. 
Third Generation. Stretching from 1964 to 1981, this generation 
marked the introduction of the computer on a chip, as well as specialized processor and memory chips for mainframe supercomputers. The 
IBM 360 and 370 series, the Cray 1 supercomputer, and Control Data’s 
Cyber 205 are all third generation machines. Average performance for 
these machines is 5 million instructions per second with main memory 
capacities of at least two million characters. Peak processing speeds, 
often cited by the manufacturer, can be considerably higher. 
Fourth Generation. The 1980s is the decade of the parallel supercomputer. Cray’s X-MP, a four-processor machine, was one of the first 
next-generation computers, delivered in 1982. Its successor, the Cray 2, 
is believed to be the first machine to execute 1 billion arithmetic operations per second. Control Data has also introduced a parallel multiprocessor system, Cyberplus, that can combine and integrate the power of as 
many as 64 high-performance CPU’s. Such a system, Control Data 
claims, would be capable of peak execution rates of 44 billion instructions per second, a huge leap in processing power that won't be verified 
in actual tests until someone contracts to buy such a machine. 
Fifth Generation. The Japanese undertook an effort, begun in 1981, to develop artificial intelligence machines: "The Fifth Generation Project". 
The promises of the Fifth Generation read like science fiction: machines that can process a terabit of knowledge – not numbers but actual 
concepts, ideas, and images – per second; programs that encode encyclopedic volumes of knowledge in a particular field and thereby solve 
problems too complex or obscure for human specialists; computers that 
design and program other computers (that, in effect, have the power of 
reproduction); machines that learn from their mistakes and from experience; and computers that interact with humans vocally and in human 
language, without the need for mechanical input. 

The focus of this text is to assess the present state of next-generation computer research and to outline what might realistically be 
expected in the next five or 10 years. 

TASK 2. Find in the text English equivalents for: 

способ обработки; полное переосмысление; ограниченная пропускная способность; ведет к уменьшению скорости; основные вехи; 
вакуумные лампы и электронно-лучевые трубки; так же, как и; 
средняя производительность; емкость основной памяти; предприняли попытку; одно задание за один прием; отодвинуть на второй 
план параллельное использование нескольких ЦПУ; компьютеры 
появились. 

TASK 3. Answer the questions. 

1. What tasks should be solved to create the next generation of 
computers? 2. What is a supercomputer? 3. What is known to you about 
von Neumann’s creative work? 4. What is a CPU and what is it for? 5. 
What was the natural sequence of steps in creating the first supercomputer? 6. Is it possible to copy processing strategies of our brain with 
the help of computer? 7. What is the difference between the work of 
our brain and computer? 8. What is known to you about the first generation of computers? 9. What should be marked in the third and the 
fourth generations of computers? 10. What might reasonably be expected in the nearest future in the development of computers? 

TASK 4. Read and translate the words, paying attention to the stress: 

´process n – ´process v – ´processing; ´increase n – in´crease v – in´creasing; ´program n – ´program v – ´programmer; con´trol n – con´trol v; ef´fect n – ef´fect v; ´access n – ´access v; ´intellect n – intel´lectual n; ´object n – ob´ject v; ´forecast n – ´forecast v; ´frequent a – fre´quent v; ´conduct n – con´duct v; ´progress n – pro´gress v. 

TASK 5. Translate the sentences, paying attention to modal verbs. 

1. A new website “Electronic book” of complaints appeared in the 
net of St. Petersburg where the townspeople could tell us about bad 
work of the officials. 2. We should know that the most powerful supercomputer in the world called “Earth Simulator” has been built in Japan. 
3. Computer designers ought to keep in mind the fact of the influence of electromagnetic radiation of ultra low frequency on the people sitting 
not in front of the computer but nearby behind it or on the right or on 
the left of it. 4. Designers of Intel Corp. assured that Pentium 4 was to 
reach the frequency of 3.8 GHz and was intended for the bus of 800 
MHz. 5. Computers would be used in our everyday life being the source 
of information, a means of communication, a device of calculation 
simulation and storing data. 6. Everyone needs to have a computer always at hand. 7. Our country must invest enormous sums of money into 
the development of the advanced computer technology. 8. The USA 
had to apply supercomputer to develop the bomb Common Vehicle to 
be dropped from space (5000 km high). 

TASK 6. Render into Russian. Find the Gerund. 

1. Developing the most powerful computer in the world so called 
Big Blue, company IBM is going to use operating system Linux as a 
basis. 2. In the future computer will be so called “closed box” where 
it’ll be rather difficult to change software or hardware. Interacting computer systems will be greater and they’ll start deciding instead of its 
owner (user). 3. Big Blue producing 11 billion operations per second 
and being the most promising, Japanese public organization intends to 
buy it for scientific purposes, i.e. for conducting bio- and nanotechnological investigations. 4. We are glad of Japanese successful overcoming of boundary in creating taste with the help of supercomputer. 
Tasting being complex combination of the feeling of food in the mouth 
with chemical and sound signals, it makes the process of simulating 
fairly hard. 5. Japan having achieved leading positions in science and 
technology, it would like to apply this supercomputer with 2636 processors for selecting suitable materials for superconducting devices.  
6. 5 years ago having implanted electrodes linked with the computer 
into the brains of paralyzed patients doctors of the Institute of Artificial 
Intellect taught them to move the cursor across the screen by means of 
the thought. Thus, being paralyzed people can communicate or even 
control necessary devices. 7. We are sure to say that a computer is a 
device that will substitute a man since the time of laughing at the jokes 
of its boss and shift its bugs on another computer. 8. Swedish company 
Electrolux is developing a new refrigerator being equipped with a tiny 
digital camera transmitting picture on the mobile phone. 

TASK 7. Read the text and make up a plan for rendering it. 

Text 1B. A Man Before His Time: Charles Babbage 
It was more than a century before a machine capable of doing 
mathematics more complex than basic arithmetic was developed. 
Charles Babbage, in 1823, was commissioned by the British Chancellor 
of the Exchequer to design a machine to solve sixth-degree polynomials – 
a + bN + cN² + dN³ + eN⁴ + fN⁵ + gN⁶ – primarily for the purpose of calculating astronomical tables more accurately. Babbage had trouble constructing a working prototype and eventually abandoned the project, but 
it was revived by Pehr Georg Scheutz of Sweden, who built two improved Babbage-design Difference Engines with Babbage’s help. 
From 1833 to the end of his life in 1871, Babbage was consumed 
with developing a general-purpose machine, a mechanical computer 
with truly revolutionary speed and scope. His Analytical Engine, had it 
worked, would have been the world’s first programmable digital computer, complete with a memory and printer. It was to have been a parallel machine as well, performing arithmetic on as many as 50 decimal 
digits at one time. What ultimately defeated the project was that Babbage’s theory, his design, was too advanced for the technology of his 
day (a problem that has doomed many projects since then). It was impossible to have mechanical parts machined accurately enough for them 
to work together smoothly. Tiny imperfections in rods, wheels, ratchets 
and gears would compound as the parts were assembled into components that groaned and threatened self-destruction. 
The Analytical Engine was designed as a digital, or counting, computer. Each of its inputs was accounted for by a click of the ratchet, 
much the same way that a clock counts seconds and compiles them into 
minutes and hours. Other non-electronic, digital computers were built 
after Babbage, but even when they functioned properly, they were slow. 
Provided Ch. Babbage had created a computer in his lifetime, what level of development would our society have reached by now? 

Essential Vocabulary 

average  n  – среднее   
on an/the average – в среднем 
character n – символ  
claim  n  – требование, претензия 
decimal  a  – десятичный 

detour  n  – обход 
discrete  a  – дискретный, раздельный, состоящий из разрозненных 
частей 
doom  v  – предназначать, обрекать, предопределять 
emerge  v  – появляться; всплывать; выходить 
fetch  v  – выбирать, извлекать 
inspiration  n  – вдохновение; влияние, воздействие 
interact  v  – взаимодействовать 
mainframe – основной, главный компьютер 
obscure  a  – неотчетливый; неизвестный 
pathway n – магистраль 
process  v  – обрабатывать 
punch  v  – перфорировать 
rely  v  – полагаться на; зависеть от 
retrieve  v  – восстанавливать, возвращать в прежнее состояние 
revival  n  – восстановление 
scope  n  – границы, пределы; область видимости 
smoothly  adv  – беспрепятственно, без помех 
shuttle  v  – курсировать; перемещать 
spur  v  – вдохновлять, побуждать 
surface  v  – покрывать 
terabit n – терабит = 10¹² (the 12th power of 10) 
verify  v  – проверять, контролировать 
yield  v  – производить, приносить 

UNIT 2 

TASK 1. Read and translate the text. Give the gist. 

Text 2A. Scalability 

Given an application and a parallel computer, how much can we 
boost the number of processors in order to improve performance? 
How much can we increase the amount of data and still have the same 
performance? Scalability is an informal measure of how the number 
of processors and amount of data can be increased while keeping reasonable speedup and efficiency. Unlimited, absolute scalability is obviously unreasonable: it would be like expecting that the design principles needed to build a car could be extended to build a car that travels as fast as an airplane. Too many parameters change if the size 
of a system radically changes and the design has to obey different 
principles. 
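The passage leans on the terms "speedup" and "efficiency" without spelling them out. The small sketch below uses the usual textbook definitions (an assumption added for illustration; they are not given in the text itself):

# Common textbook definitions of speedup and efficiency, assumed here for
# illustration; the text itself does not define them.
def speedup(t_serial, t_parallel):
    # How many times faster the parallel run is than the serial one.
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, processors):
    # Fraction of the ideal p-fold speedup that is actually achieved.
    return speedup(t_serial, t_parallel) / processors

# Example: a job that takes 100 s on one processor and 20 s on 8 processors
# has a speedup of 5.0 and an efficiency of 0.625.
print(speedup(100.0, 20.0), efficiency(100.0, 20.0, 8))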
Relative scalability, that is the property of maintaining a reasonable 
efficiency while slightly changing the number of processors, is instead 
possible and indeed very useful. This scalability allows users to adapt 
their system to their needs without having to replace it. In general, most 
parallel processors are scalable in this sense unless they already use a 
number of processors that saturates some of the system's resources. 
Changing the number of processors to execute the same problem 
faster causes, sooner or later, a decrease in efficiency because each 
processor has too little work to do compared to the overhead. If, on the 
other hand, the size of the problem, i.e. the amount of data processed, 
also grows, the efficiency can be held constant. If, instead, the problem 
size grows while the number of processors remains constant, efficiency 
also grows unless the increase in the amount of data should saturate 
some system resources, e.g. the memory. This is a very important consideration because it implies that making a very efficient use of a parallel processor is possible if we are willing to apply it to a sufficiently 
large problem. 
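A small numerical sketch may make this trade-off concrete. It relies on a deliberately simple cost model invented here for illustration (the text gives no model): each of p processors does n/p units of useful work plus a coordination overhead that grows with p.

# Toy cost model, assumed only for illustration: t_parallel = n/p + c*p,
# i.e. each processor's share of the work plus a coordination overhead.
def efficiency(n, p, c=1000.0):
    t_serial = n                 # one processor, no coordination overhead
    t_parallel = n / p + c * p   # per-processor work plus overhead
    return t_serial / (p * t_parallel)

# Same problem, more processors: each processor has too little work
# compared to the overhead, and efficiency drops.
for p in (2, 8, 32):
    print(p, round(efficiency(1_000_000, p), 2))    # about 1.0, 0.94, 0.49

# Growing the problem along with the machine holds efficiency constant.
for n, p in ((1_000_000, 2), (16_000_000, 8), (256_000_000, 32)):
    print(p, round(efficiency(n, p), 2))            # about 1.0 in every case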
Scalability could be characterized by a function that indicates the 
relationship between the number of processors and the amount of data at constant efficiency. For example, if, when the processors are doubled, the amount of data needs to be doubled in order to keep the same efficiency, then the scalability is rather good. If, instead, the data need to quadruple to keep the efficiency constant, the system is less scalable. Too large an increase in problem size in order to keep efficiency constant is not a good characteristic, because the user might not need to process such a large problem and the system resources and design might not be able to deal with a very large problem. 
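The "scalability function" mentioned above can be sketched the same way. The snippet below (again an illustration under assumed overhead models, not something given in the text) computes how much data is needed to hold a target efficiency as the processor count doubles: with a total overhead that grows linearly in p the data only needs to double, while with a quadratic overhead it has to quadruple.

# Assume efficiency = n / (n + overhead(p)): the share of all processor time
# spent on useful work. Both overhead models below are illustrative guesses.
def data_needed(p, target, overhead):
    # Smallest n that keeps n / (n + overhead(p)) at the target efficiency.
    return target * overhead(p) / (1.0 - target)

linear = lambda p: 1000.0 * p          # total overhead grows linearly with p
quadratic = lambda p: 1000.0 * p * p   # total overhead grows with p squared

for p in (2, 4, 8):
    print(p, data_needed(p, 0.9, linear), data_needed(p, 0.9, quadratic))
# With the linear model the required n doubles each time p doubles (good
# scalability); with the quadratic model it quadruples (less scalable).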
Being able to keep efficiency constant by scaling the problem size 
is a very good property, unfortunately not all problems can be scaled to 
take advantage of a better efficiency. In some cases it might be possible 
to “batch” a few instances of a problem together and generate a larger 
problem, in other cases, e.g. weather forecasting, it is useful to solve a 
larger problem. In other cases, e.g. sensory problems like speech recognition, solving a larger problem does not make sense and we have to do 
with either a low efficiency or a low speedup, or both. 
