In 1948, the British mathematician Alan Turing, a member of the informal cybernetics group the Ratio Club, first used the terms “smart machine” and “machine intelligence.” During World War II, he had led a classified group of cryptographic mathematicians who cracked the code of the famous German military encryption machine Enigma.
His experience with computing machines allowed the British mathematician to pose a question that had previously belonged to fiction: “Can machines think?” In 1950, in the philosophical journal Mind, in the article “Computing Machinery and Intelligence,” he finally formulated his famous empirical test, the Turing test. To pass the test, a computer must understand natural language, reason like a person, and learn on its own. Together, these challenges reflect the central problems of artificial intelligence theory, and the Turing test remains a key criterion for assessing artificial intelligence. One modification of it is the reverse Turing test, familiar to everyone as CAPTCHA. Since 1991, the annual Loebner Prize competition, based on the Turing test, has been held.
In 2001, three Russian programmers from St. Petersburg – Vladimir Veselov, Evgeny Demchenko, and Sergei Ulasen – created a chatbot program. In 2012, at a competition honoring the 100th anniversary of Alan Turing's birth, their virtual 13-year-old boy from Odessa, Eugene Goostman, managed to convince 29% of the judges that he was human. In 2014, the program was recognized as the first software to pass the Turing test.
In 2014, the film about Alan Turing, “The Imitation Game,” was released – a historical drama about the military cryptographer who cracked the German Enigma encryption machine during the Second World War and brought Victory Day closer by two years. The film, based on the biographical book “Alan Turing: The Enigma,” was nominated for leading film awards, won an Oscar for best adapted screenplay, and entered the top 10 films of 2014. An interesting fact: Benedict Cumberbatch, nominated for best actor, and Alan Turing, the character he played, are distant relatives.
- Artificial intelligence as a concept
- Artificial intelligence and neural networks
- Machine learning: algorithms to extract knowledge from data
- The use of artificial intelligence: where it is used
- Artificial intelligence – movies and T.V. series
- Artificial intelligence is the future of the industry
- How artificial intelligence works
- Artificial Intelligence – Who Will Be First
- When will artificial intelligence become commonplace
Artificial intelligence as a concept
The term “artificial intelligence” appeared in 1956, but A.I. technologies reached their peak of development and adoption only in the last decade, thanks to a considerable increase in data volumes, improved algorithms, and advances in computing power and storage systems.
There is still no single, clear definition of artificial intelligence in the terminology and conceptual apparatus. The American computer scientist John McCarthy, in the 1950s, defined A.I. as “the work of machines, similar to the manifestation of the human mind.”
Other definitions of A.I. (Artificial intelligence):
- An algorithm for self-learning, research, and application of the results found to solve any possible tasks;
- A science and high technology for creating intelligent machines and computer programs;
- The ability of a program to exhibit properties associated with reasonable human behavior;
- A branch of computer science that deals with simulating human thinking with a computer.
Artificial intelligence technologies and algorithms sit at the intersection of various scientific fields: machine learning, mathematics, physics, statistics, probability theory, psychology, linguistics, and human brain research. A.I. allows computers to learn from their own experience, adapt to the parameters they are given, and perform tasks that previously could not be done without a human.
Artificial intelligence and neural networks
Artificial Neural Networks are mathematical models that describe and model the non-linear relationships between the signals of neurons (electrically excitable cells). The human brain is a multitasking computer: consuming only about 20 W, it performs computations estimated at roughly 1,000 petaflops. For comparison, the Chinese supercomputer Tianhe-2 delivers 33.86 petaflops while consuming 17.6 MW.
The neural networks of the human brain constantly change and update themselves as experience is gained and accumulated. This model of the human brain became the template for its computer simulation – the artificial neural network (ANN). In data processing and complex problem solving, such networks outperform traditional software algorithms. But there is also a drawback: even the most optimized models operate as “black boxes,” which do not let us examine and explain the A.I.'s decision-making mechanisms. This opacity, together with the impossibility of accurately predicting the consequences of self-learning, remains one of the critical ethical problems of artificial intelligence.
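The basic building block of such a network can be sketched in a few lines of Python (a minimal illustration with arbitrary example weights, not a trained model): each artificial neuron takes a weighted sum of its input signals and passes it through a non-linear activation, and neurons are chained so that one neuron's output becomes another's input.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of input signals
    passed through a non-linear activation (here, the sigmoid)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # output signal in (0, 1)

# Two neurons chained into a tiny "network": the output of the
# first becomes the input of the second. Weights are illustrative;
# in a real ANN they are adjusted during training.
h = neuron([0.5, 0.2], weights=[0.8, -0.4], bias=0.1)
y = neuron([h], weights=[1.5], bias=-0.7)
print(round(y, 3))
```

The “black box” problem mentioned above arises because a real network contains millions of such weights, and no single weight has a human-readable meaning.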
Machine learning: algorithms to extract knowledge from data
Machine learning is considered to be one of the fundamental aspects of artificial intelligence. Intelligent machines accumulate and interpret incoming data for later self-learning. Today it is the most advanced business tool in the field of artificial intelligence.
Deep Blue and AlphaGo are two well-known programs using artificial intelligence. IBM's Deep Blue relied on a pre-programmed set of algorithms unrelated to machine learning. In the spring of 2016, artificial intelligence achieved a serious success: DeepMind's AlphaGo beat one of the world's strongest Go players. AlphaGo is a typical example of machine learning: the algorithm taught itself on a vast set of moves previously played by the game's champions.
One of the most common types of machine learning is deep learning, which uses artificial neural networks that mimic the decision-making processes of the human mind.
For example, for a deep learning system to “understand” what a fox looks like, it needs as many images as possible so that the program learns to distinguish the fox from other mammals of its family. Deep learning also has business applications: you can take a vast amount of data – millions of images – and use it to identify specific characteristics. Text search, fraud detection, X-ray analysis, handwriting recognition, image search, speech recognition, translation – all of these tasks can be handled with deep learning. At Google, for example, deep learning networks have replaced “rules-based systems that require manual work.”
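The “learning from labeled examples” loop behind all of these applications can be sketched in miniature (a toy illustration, not any production system's actual pipeline): instead of hand-written rules, the model starts with zero weights and nudges them every time it misclassifies a labeled example.

```python
# Toy "learning from examples": a perceptron learns to separate two
# classes of 2-D feature vectors by adjusting weights on mistakes.
samples = [((2.0, 1.0), 1), ((1.5, 2.0), 1),      # class 1
           ((-1.0, -0.5), 0), ((-2.0, -1.5), 0)]  # class 0

w = [0.0, 0.0]
b = 0.0
for _ in range(20):                       # a few passes over the data
    for (x1, x2), label in samples:
        pred = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = label - pred                # -1, 0, or +1
        w[0] += 0.1 * err * x1            # nudge the decision rule
        w[1] += 0.1 * err * x2            # toward the correct answer
        b += 0.1 * err

# After training, the learned rule classifies the training data.
preds = [1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in samples]
print(preds)
```

A deep learning system applies the same idea at scale: millions of images instead of four points, and many stacked non-linear layers instead of one linear rule.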
The use of artificial intelligence: where it is used
The development and application of artificial intelligence enable new technologies to contribute positively to all areas of life. According to scientists, machines will eventually learn to perform surgery and conduct mathematical research. Let us briefly mention only the main areas and the most striking examples of the use of A.I.
In medicine
Artificial intelligence will soon help doctors solve one of the most challenging tasks: restoring motor activity to paralyzed patients. Intel and Brown University researchers are actively working on an intelligent spinal interface project – replacing some functions of the spinal cord with an A.I. interface. The Intel neural network will take over the transmission of nerve impulses to the paralyzed parts of the body, and the new technology will use electrodes to create a “bypass” around the damaged area of the spine. In Russia, the “Medicina” clinic successfully uses an artificial intelligence neural network for diagnostics at the X-ray stage. Its database of more than 200,000 X-ray images is constantly expanding. The accuracy of the A.I. system in describing an image, in tandem with a doctor, is 95-98%.
In defense
The U.S. Army has been developing artificial intelligence that recognizes faces by thermal imaging in the dark and even behind physical barriers. Another A.I. algorithm controls unmanned fighters and conducts aerial combat, and tank aiming systems can already see camouflaged targets.
In finance
Research and market analysis, personal finance management, financial portfolio management, algorithmic trading, and more are areas of A.I. application. In the banking industry, artificial intelligence processes large amounts of data to improve customer service and generates personal offers through optimal communication channels.
Within five years, more than 325,000 retail brands are expected to introduce A.I. solutions. Demand forecasting and automated marketing will give retailers more flexibility in pricing analysis and forecasting and let them maximize revenue thanks to predictable demand.
In transport and logistics
Artificial intelligence is used to optimize transport systems and road traffic, reducing emissions by cutting waiting times and organizing traffic rationally. The main difficulty for A.I. transport systems is the complexity of road infrastructure and the large arrays of constantly changing information. A.I. systems for driverless cars are also developing actively.
In the field of human resources
In recruitment, artificial intelligence is used to screen resumes and pre-select potentially successful candidates, while chatbots handle routine tasks: online and phone customer service, and recognition of faces, voices, and emotions.
In everyday life and catering
One of the deals of 2019 was McDonald's purchase of a machine-learning startup. The fast-food giant began using artificial intelligence in its self-service kiosks to automatically adjust the menu depending on the weather, news, traffic, and other factors. Also in 2019, Microsoft signed a contract with the Finnish company Four to produce the world's first whiskey whose unique formula is created by artificial intelligence. According to the company's plan, the result will be a perfect premium-brand spirit worthy of the highest international awards.
In the arts and media
Facebook's neural network composes poems that are perfect in meter, rhythm, and rhyme. Yandex has launched Autopoet, and the voice assistants Alice and Siri help find, in the information stream, what is most relevant to the user's search queries, accumulating search history for further processing and analysis.
Artificial intelligence – movies and T.V. series
Hollywood always responds quickly to the demands of the time – the wave of films about A.I. was started by the American series “Artificial Intelligence.” The trending theme was continued by the movies “Artificial Intelligence: Access Unlimited,” “Hello, Artificial Intelligence,” “Eva: Artificial Intelligence,” and “Artificial Intelligence.” And in 2020 there will be yet another “Artificial Intelligence,” directed by Bel Falcon.
Viewers have watched artificial intelligence in all its manifestations in “The Matrix,” “I, Robot,” and “Terminator,” whose A.I. Skynet is a U.S. Department of Defense supercomputer managing the missile defense system. Artificial intelligence itself is now no worse than the producers: it has learned to predict a planned blockbuster's box office and audience.
In the Hollywood film industry, the decision to release a new film is modeled by A.I. programs. Predicting box-office success or failure is difficult even for a machine, which is why remakes of blockbusters like “Transformers” keep coming to the screen – A.I. has predicted the approximate box office even for the franchise's tenth installment.
And as long as the computer mind predicts commercial interest in a franchise, the earlier successful project will be repeated. Significantly, even the quality of filming and editing is not a priority, because the plot and the way it is presented are modeled by artificial intelligence based on the previous blockbuster's parameters: the audience, the release date, the cast.
Artificial intelligence is the future of the industry
Machine learning technologies are most common in discrete manufacturing (aviation, machine building, and instrumentation) – 44% of A.I. projects. In second place are oil production, petrochemicals and refining, metallurgy, and chemistry – 22% of projects. Another 11% of artificial intelligence projects are in energy.
A few years ago, the leading trend in industrial production was integrated enterprise automation systems. Big business preferred ready-made solutions based on robust, distributed software tools to control full-cycle production. Artificial intelligence adds the ability to analyze in real time, supporting an enterprise's operation even when management goals change or the controlled object is suddenly transformed by environmental factors. Today, intelligent systems are trained to quickly adjust industrial-management algorithms and find the most effective solutions to emerging problems.
The following aspects characterize artificial intelligence in the workplace:
- There is no precise algorithm for the operational management and coordination of the company's units;
- Deeply hidden intra-system links have not been fully explored;
- A wide variety of data-collection solutions;
- Options for analyzing diverse information (video, texts, etc.);
- A wide range of possibilities not predetermined in advance.
The introduction of artificial intelligence in enterprises does not imply a radical revolution in business processes. Current A.I. solutions on the market let industrial production reach a new qualitative level by improving existing functionality. Artificial intelligence makes it possible to gradually expand the range of production processes in which it participates, coordinating their handling.
How artificial intelligence works
Any production cycle using artificial intelligence can be imagined as a combination of the simplest elements – single-tasking agents. The variety of agents and the number of each type depend on the kind of tasks, the time allotted to solve them, and the experience a specific artificial intelligence accumulates. Agent types:
- Mechanisms are responsible for collecting and processing information and monitoring the condition of equipment and personnel;
- Coordinators guarantee the interaction of agents within the artificial intelligence algorithm;
- Search engines accumulate local and global information, determine the internal connections of production processes, and deliver final results;
- Trainers conceptually summarize the accumulated experience of technological processes and experts and collect information in the desired field of artificial intelligence;
- Decision-makers offer conclusions within a limited set of options and help coach production systems and human staff.
Artificial Intelligence – Who Will Be First
A new “race” for leadership in artificial intelligence is already well underway in the world. In parallel with the arms race, all the world's leading powers are seriously concerned about the coming technological changes. But no one has a holistic picture of where the development of artificial intelligence (A.I.) systems will lead us.
On December 17, 2019, the Center for a New American Security (CNAS) published the report “The American AI Century: A Blueprint for Action,” arguing that A.I. will define the economic, military, and geopolitical power of nations in the coming decades. According to the authors, the world is now at the center of a technological tsunami, and artificial intelligence will become the most important technological innovation of the near future.
China, the European Union member states, Japan, South Korea, and Russia are increasing spending on A.I. research, actively training specialists, and already have A.I. strategies that threaten the U.S. technological advantage. China sees advances in artificial intelligence as a means of surpassing the United States in the economic and military fields, declaring its intention to become the world leader in artificial intelligence by 2030.
One of the report's authors believes that the era of great-power competition has returned, with technology at its center. The United States should respond to the challenge immediately, as it did during the space race, although there were far fewer participants then than now. He is confident that the country leading in A.I. will dominate the 21st century. Another expert, Bob Work, who was Deputy Secretary of Defense under Presidents Obama and Trump, said: “Both the Russians and the Chinese have concluded that the technological breakthrough is provided through artificial intelligence.”
Russian President Vladimir Putin is sure of this: “If someone can provide a monopoly in the field of artificial intelligence (the consequences we all understand) – he will become the master of the world. The struggle for technological leadership, especially in artificial intelligence, has already become a global competition field. The developed countries of the world have already adopted their plans to develop such technologies. And we must, of course, ensure technological sovereignty in the field of artificial intelligence.”
On October 11, 2019, Russia's national strategy for developing artificial intelligence until 2030 was approved. In the federal program “Digital Economy,” artificial intelligence is named one of the most critical “end-to-end” digital technologies. The U.S. became the leader in artificial intelligence investment in 2019, with $70 billion. Microsoft has named Russia an international leader in the adoption of artificial intelligence, ahead of the United States and Western Europe: according to Microsoft, 30% of top managers in Russia actively use A.I., against a global average of 22.3%.
Among the leaders, 32% named the introduction of A.I. a priority, 26% – the development of innovative business ideas, and 25% – market research. Russians also expressed high readiness to learn in the field of artificial intelligence: 90% (versus 67.3% worldwide) want to attract A.I. professionals. Demand for specialists in the field is growing: A.I. engineers and specialists in machine learning, testing, and monitoring of A.I. systems are needed. The U.S. and European countries are also promoting comprehensive state programs in cooperation with the industry's leading companies. Artificial intelligence training is based at leading universities and research centers. China has officially announced the opening of A.I. faculties in 35 universities. In Russia, over the past two years, more than 50 universities have started special artificial intelligence programs, and by the end of 2020 Moscow State University will establish the Institute for Advanced Research on Artificial Intelligence and Intelligent Systems.
When will artificial intelligence become commonplace?
Artificial intelligence has become as familiar a theme in I.T., the economy, and business as the Internet, cellular, and cloud technologies once did. Analyzing the problems of artificial intelligence as its capabilities deepen and diversify, experts increasingly ask: what will remain of man, and will man remain himself, after the pervasive introduction of A.I. into all social spheres?
Artificial intelligence differs from hardware-based robotic process automation, although it makes it possible to automate repetitive learning and data-search processes. The key goal of A.I. development is not the automation of manual labor but stable, permanent computerized execution of many global tasks. Data automation still requires human involvement to configure systems and formulate tasks competently for artificial intelligence to solve.
At the current pace of scientific development, in 50-100 years artificial intelligence may match the human brain in several competencies and even overtake people. This opinion was shared by Naoka Sugimoto, head of development of the Japanese A.I. robot Palro at FujiSoft Incorporated: “I think that when artificial intelligence surpasses a person, when its mind reaches the point where it can be confused with a person's, it will no longer be artificial intelligence or a robot, but a new person.”
In any case, the problems of artificial intelligence that fiction portrays through images of anthropomorphic (humanoid) robots are, in reality, still in their infancy and do not yet inspire hope for a quick solution. “Artificial intelligence” is just a figurative name that people, inclined to endow everything inanimate with anthropomorphic traits, have given the latest high technologies. Humanizing bots like Yandex's Alice helps a person accept that all this is nothing more than a sublimation of communication. Therefore, no robot or A.I. can ever become more dangerous to society than the natural human mind.