Map of the application of artificial intelligence technologies: medicine, education, transport and other areas. Nosov N.Yu., Sokolov M.D. Trends in the development of artificial intelligence

You have probably heard of a robot that brings you a can of cola when you say you are thirsty. You have probably also heard of speech recognition systems that control home appliances, and of flight simulators that recreate the real environment of an aircraft in flight.

In 1956, the world-famous American scientist John McCarthy coined the term at the heart of all these possibilities and many more: "Artificial Intelligence". Artificial intelligence, AI for short, is the science and engineering of creating intelligent machines and intelligent computer programs capable of responding like a human: machines that can sense the world around them, understand conversation, and make decisions the way a person would. Artificial intelligence has given us everything from scanners to real-life robots.

Today, the field of artificial intelligence can be described as a soup of cognitive computing, psychology, linguistics and mathematics waiting for a lightning strike: an attempt to combine the efforts and resources of researchers, develop new approaches, and draw on the world's knowledge repositories to create a spark that will give rise to a new form of life.

In artificial intelligence, we nurture a child machine from childhood to adulthood, and in doing so we create entirely new approaches to machine learning.

Branches of artificial intelligence

John McCarthy identified some of the branches of AI which are described below. He also noted that several of them have yet to be identified.

Artificial intelligence logic: the AI program must be aware of facts and situations.

Pattern recognition: when the program makes an observation, it is usually programmed for recognition and pattern matching. For example, a speech recognition system or a face recognition system.

Representation: there must be a way to represent facts about the world to an AI system; mathematical language is used for this.

Inference: inference allows new facts to be derived from existing facts; from some facts, others can be inferred.

Planning: The planning program begins with facts and a statement of purpose. From them, the program generates a strategy to achieve the goal.

Common sense knowledge and reasoning: this active field of AI research arose in the 1950s, but its results are still far from human level.

Epistemology: the study of the device's ability to learn and acquire knowledge; it concerns the kinds of knowledge required for particular types of tasks.

Heuristics: a heuristic is a way of trying to discover something, or an idea embedded in a program.

Genetic programming: the automatic creation of LISP (List Processing) programs that solve a given problem.
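
To make the idea concrete, here is a heavily simplified, hypothetical sketch of genetic programming written in Python rather than LISP: candidate arithmetic expressions are represented as nested tuples, scored against a target function, and evolved by random mutation. All names, parameters and the target function are illustrative assumptions, not taken from any particular system.

```python
import random

# Hypothetical genetic-programming sketch: evolve expression trees such as
# ("+", "x", ("*", "x", "x")) toward the target function f(x) = x*x + x.
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def random_expr(depth=2):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", 1.0, 2.0])
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, float):
        return expr
    op, a, b = expr
    return OPS[op](evaluate(a, x), evaluate(b, x))

def error(expr):
    # Sum of squared errors against the target on a few sample points.
    return sum((evaluate(expr, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(expr):
    # Replace a random subtree with a freshly generated one.
    if not isinstance(expr, tuple) or random.random() < 0.3:
        return random_expr()
    op, a, b = expr
    return (op, mutate(a), b) if random.random() < 0.5 else (op, a, mutate(b))

population = [random_expr() for _ in range(50)]
for _ in range(100):
    population.sort(key=error)
    survivors = population[:10]   # keep the best expressions
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = min(population, key=error)
print("best expression:", best, "error:", error(best))
```

With luck the run converges to an expression equivalent to x*x + x; the point is only the loop of generate, score and mutate, not the particular choices made here.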

Tools used to solve complex problems in the creation of AI

Over the past six decades, various tools have been developed to solve complex problems in computer science. Some of them are:

Search and optimization

Most problems in AI can, in theory, be solved by intelligently searching through the possible solutions, but simple exhaustive search is rarely sufficient for real-world problems. In the 1990s, search methods based on optimization became popular: for many problems you can make an initial guess and then gradually refine it. Various optimization algorithms have been written to support this search process.
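
As a hedged sketch of this "guess, then refine" idea (not any specific historical system), the following Python fragment performs simple hill climbing on a one-dimensional cost function; the function, step size and iteration count are arbitrary illustrative choices.

```python
import random

def cost(x):
    # An arbitrary illustrative objective: we want the x that minimizes it.
    return (x - 3.0) ** 2 + 1.0

def hill_climb(start, step=0.1, iterations=1000):
    current = start
    for _ in range(iterations):
        # Make a small random guess near the current solution and keep it
        # only if it improves the cost ("guess, then refine").
        candidate = current + random.uniform(-step, step)
        if cost(candidate) < cost(current):
            current = candidate
    return current

best = hill_climb(start=random.uniform(-10, 10))
print(f"approximate minimum near x = {best:.3f}, cost = {cost(best):.3f}")
```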

Logic

Logic is the study of valid arguments. In AI it is used both to represent knowledge and to solve problems, and various types of logic appear in artificial intelligence research. First-order logic uses quantifiers and predicates and helps represent facts and their properties. Fuzzy logic relaxes classical logic: instead of limiting the truth of a statement to 1 (true) or 0 (false), it allows degrees of truth anywhere between 0 and 1.
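
A minimal Python sketch of what that difference looks like in practice, with a hypothetical "warm" membership function and the common min/max operators chosen purely for illustration:

```python
# A minimal sketch of fuzzy truth values (illustrative only): instead of the
# two classical values 0 and 1, a statement gets a degree of truth in [0, 1].

def fuzzy_and(a, b):
    return min(a, b)      # a common choice of t-norm

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

def warm(temperature_c):
    # Hypothetical membership function: how "warm" a temperature is.
    return max(0.0, min(1.0, (temperature_c - 10.0) / 15.0))

t = 18.0
print("warm:", warm(t))                                       # about 0.53
print("warm and not warm:", fuzzy_and(warm(t), fuzzy_not(warm(t))))
```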

Probability theory

Probability is a way of expressing uncertain knowledge. The concept is given a precise mathematical meaning in probability theory, which is widely used in AI.
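
For instance, a single application of Bayes' rule, the kind of probabilistic update used throughout AI, can be written in a few lines of Python; the diagnostic numbers below are invented for illustration.

```python
# A small worked example of Bayes' rule (illustrative numbers only).

def bayes(prior, likelihood, false_positive_rate):
    """P(hypothesis | evidence) from P(hypothesis), P(evidence | hypothesis)
    and P(evidence | not hypothesis)."""
    evidence = likelihood * prior + false_positive_rate * (1.0 - prior)
    return likelihood * prior / evidence

# Hypothetical diagnostic test: 1% base rate, 95% sensitivity, 5% false positives.
posterior = bayes(prior=0.01, likelihood=0.95, false_positive_rate=0.05)
print(f"probability of the condition given a positive test: {posterior:.2%}")
```

Even with a positive test, the posterior here is only about 16%, which is exactly the kind of non-obvious conclusion that makes probabilistic reasoning valuable to AI systems.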

Artificial intelligence and its application

Artificial intelligence is currently being used in a wide range of fields, including simulation, robotics, speech recognition, finance and stocks, medical diagnostics, aviation, security, gaming, and more.

Let's take a closer look at some of the areas:

Gaming: there are machines that can play chess at a professional level, and AI is also applied in many video games.

Speech recognition: Computers and robots that understand language at the human level have AI built into them.

Simulators: simulation is the imitation of some real thing, and it is used in many contexts, from video games to aviation. Simulators include the flight simulators used to prepare pilots for flying real aircraft.

Robotics: Robots have become commonplace in many industries as robots have proven to be more efficient than humans, especially in repetitive jobs where humans tend to lose focus.

Finance: Banks and other financial institutions rely on intelligent software to provide accurate data analysis and help make predictions based on that data.

Medicine: AI systems are used in hospitals to manage patient schedules, staff rotation, and medical information. Artificial neural networks, mathematical models inspired by the structure and functional aspects of biological neural networks, assist in medical diagnosis.

Artificial intelligence finds use in many other fields and applications: security systems, text and speech recognition, data mining, email spam filtering, and a huge number of other examples. A British telecommunications group has applied heuristic scheduling to an application that manages the work of over twenty thousand engineers. AI has also found a place in the automotive industry, where fuzzy logic controllers have been developed for automatic transmissions.

Problems faced by the creators of artificial intelligence

Over the past six decades, scientists have been actively working on simulating human intelligence, but growth has slowed down due to many problems in simulating artificial intelligence. Some of these problems are:

Knowledge base: the number of facts a person knows is simply enormous. Preparing a database that contains all the knowledge of the world is a hugely time-consuming task.

Deduction, reasoning and problem solving: AI must solve every problem step by step, whereas people, as a rule, solve problems by intuitive judgment and only then draw up a plan of action, a program. Artificial intelligence is making slow progress in mimicking the human way of solving problems.

Natural Language Processing: Natural language is the language that people speak. One of the main challenges AI faces is recognizing and understanding what people are saying.

Planning: planning comes naturally only to humans, because humans can think ahead. The ability to plan and think like a human is essential for intelligent agents: like humans, they should be able to visualize the future.

Positive aspects of using AI

Already, we can see small applications of artificial intelligence in our homes, such as smart TVs and smart refrigerators; in the future, AI will be present in every home. Artificial intelligence combined with nanotechnology or other technologies can give rise to new branches of science. The development of artificial intelligence will surely make it part of our everyday life. Already, robots are replacing people in some workplaces. In the military industry, artificial intelligence will allow the creation of various modern weapons, such as robots, that could reduce casualties in the event of war.

Disadvantages of using AI

Although artificial intelligence has many advantages, it also has significant drawbacks.
At the most basic level, relying on artificial intelligence for everyday tasks can foster laziness in people, which could in turn lead to the degradation of the general population.

The use of artificial intelligence and nanotechnology in the military industry certainly has many positive aspects, such as creating an ideal protective shield against any attacks, but there is also a dark side. With the help of artificial intelligence and nanotechnology, we will be able to create very powerful and destructive weapons, and if used carelessly, they can lead to irreversible consequences.

The massive use of artificial intelligence will lead to a reduction in jobs for people.

In addition, the rapid pace of development and application of artificial intelligence and robotics could push the Earth towards an ecological disaster. Even now, waste from computer components and other electronic devices causes great harm to our planet.

If we give machines minds, they will be able to use them to the fullest. Intelligent machines may become smarter than their creators, and this could lead to the outcome shown in the Terminator film series.

Conclusion and future application

Artificial intelligence is an area in which a great deal of research continues. It is a branch of computer science concerned with understanding the nature of intelligence and building computer systems capable of intelligent action. Even though people have intelligence, they are not able to use it to the fullest extent possible; machines, if we give them intelligence, would be able to use it completely. This is both an advantage and a disadvantage. We depend on machines for almost every aspect of life; machines are now part of our lives and are used everywhere. We must therefore understand machines better and be aware of what the future may hold if we give them intelligence. Artificial intelligence in itself is neither good nor bad; it becomes one or the other depending on how we use it.

From the moment when artificial intelligence was recognized as a scientific direction, and this happened in the mid-50s of the last century, the developers of intelligent systems have had to solve many problems. Conventionally, all tasks can be divided into several classes: human language recognition and translation, automatic theorem proving, creation of game programs, image recognition and machine creativity. Let us briefly consider the essence of each class of problems.

Proof of theorems.

Automated theorem proving is the oldest application of artificial intelligence. A lot of research has been carried out in this area, which resulted in the appearance of formalized search algorithms and formal representation languages, such as PROLOG - a logical programming language, and predicate calculus.

Automatic proofs of theorems are attractive because they are based on the generality and rigor of logic. Logic in a formal system implies the possibility of automation, which means that if you represent the task and additional information related to it as a set of logical axioms, and special cases of the task as theorems that require proof, you can get a solution to many problems. Systems of mathematical justification and automatic proofs of theorems are based on this principle. In past years, repeated attempts have been made to write a program for automatic proofs of theorems, but it has not been possible to create a system that allows solving problems using a single method. Any relatively complex heuristic system could generate many irrelevant provable theorems, with the result that programs had to prove them until the right one was found. Because of this, the opinion has arisen that large spaces can only be dealt with with the help of informal strategies specially designed for specific cases. In practice, this approach turned out to be quite fruitful and was, along with others, the basis of expert systems.

At the same time, reasoning based on formal logic cannot be ignored. A formalized approach allows solving many problems. In particular, using it, you can manage complex systems, check the correctness of computer programs, design and test logical circuits. In addition, researchers in automatic theorem proving have developed powerful heuristics based on the evaluation of the syntactic form of logical expressions. As a result, it became possible to reduce the level of complexity of the search space without resorting to the development of special strategies.

Automatic theorem proving is also of interest to scientists because for particularly complex problems the system can still be used, although not without human intervention. Currently, such programs often act as assistants: specialists break the task into several subtasks, then devise heuristics for searching through the possible options. Next, the program proves lemmas, checks less essential assumptions, and fills in the formal details of the proofs outlined by the person.
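
To make the underlying principle concrete, a set of axioms plus a statement to be proved, here is a toy propositional resolution prover in Python; it is a sketch of the general idea only, not PROLOG and not any production system, and the example axioms are invented.

```python
from itertools import combinations

def negate(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def resolve(c1, c2):
    """Return all resolvents of two clauses (clauses are frozensets of literals)."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return resolvents

def entails(axioms, goal):
    """Resolution refutation: add the negated goal and search for the empty clause."""
    clauses = set(axioms) | {frozenset({negate(goal)})}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:              # empty clause derived: contradiction found
                    return True
                new.add(r)
        if new <= clauses:             # nothing new can be derived
            return False
        clauses |= new

# Hypothetical axioms: "if it rains, the ground is wet" (~rain or wet), and "rain".
axioms = [frozenset({"~rain", "wet"}), frozenset({"rain"})]
print(entails(axioms, "wet"))          # True: "wet" follows from the axioms
```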

Pattern recognition.

Pattern recognition is the selection, from the total set of features, of the essential features that characterize the initial data, and the assignment of the data to a certain class on the basis of this information.

The theory of pattern recognition is a branch of computer science whose task is to develop the foundations and methods for identifying and classifying objects (objects, processes, phenomena, situations, signals, etc.), each of which is endowed with a set of certain features and properties. In practice, it is necessary to identify objects quite often. A typical situation is recognizing the color of a traffic light and deciding whether to cross the street at the moment. There are other areas in which object recognition cannot be dispensed with, for example, the digitization of analog signals, military affairs, security systems, and so on, so today scientists continue to actively work on creating image recognition systems.

The work is carried out in two main directions:

  • Research, explanation and modeling of the recognition abilities inherent in living beings.
  • Development of theoretical and methodological foundations for creating devices that would allow solving individual problems for applied purposes.

The formulation of recognition problems is carried out using a mathematical language. While the theory of artificial neural networks is based on obtaining results through experiments, the formulation of pattern recognition problems is not based on experiment, but on the basis of mathematical evidence and logical reasoning.

Consider the classical formulation of such a problem. There is a set of objects that must be classified; the set consists of subsets, or classes. Given: information describing the set, information about the classes, and a description of a single object whose class is not indicated. Task: based on the available data, determine which class the object belongs to.
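
A minimal Python sketch of this classical formulation, with hypothetical feature vectors: each class is summarized by the mean of its known examples, and an unlabeled object is assigned to the class with the nearest mean.

```python
# Minimal sketch of the classical recognition task (hypothetical feature vectors):
# assign an unlabeled object to the class whose known examples are, on average, closest.

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(obj, classes):
    """classes: dict mapping class name -> list of known feature vectors."""
    centroids = {name: mean(examples) for name, examples in classes.items()}
    return min(centroids, key=lambda name: distance(obj, centroids[name]))

classes = {
    "red_light":   [[0.9, 0.1], [0.8, 0.2]],   # hypothetical [redness, greenness] features
    "green_light": [[0.1, 0.9], [0.2, 0.8]],
}
print(classify([0.85, 0.15], classes))          # -> "red_light"
```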

If there are monochrome images in the problems, they can be considered as functions on a plane. The function will be a formal record of the image and at each point express a certain characteristic of this image - optical density, transparency, brightness, etc. In this case, the model of the image set will be the set of functions on the plane. The formulation of the recognition problem depends on what the stages following recognition should be.

Pattern recognition methods include the experiments of F. Rosenblatt, who introduced the concept of a brain model. The task of the experiment is to show how psychological phenomena arise in a physical system with known functional properties and structure. The scientist described the simplest experiments on recognition, but their feature is a non-deterministic solution algorithm.

The simplest experiment, on the basis of which psychologically significant information about the system can be obtained, is as follows: the perceptron is presented with a sequence of two different stimuli, to each of which it must react in some way, and for different stimuli the reaction must be different. The purpose of such an experiment may be different. The experimenter may be faced with the task of studying the possibility of spontaneous discrimination by the system of the presented stimuli without outside interference, or, conversely, to study the possibility of forced recognition. In the second case, the experimenter teaches the system to classify various objects, which may be more than two. The learning experience goes as follows: the perceptron is presented with images, among which there are representatives of all classes to be recognized. The correct response is reinforced according to the memory modification rules. After that, the experimenter presents a control stimulus to the perceptron and determines the probability of obtaining a given response for images of this class. The control stimulus may match one of the objects presented in the training sequence or be different from all the objects presented. Depending on this, the following results are obtained:

  • If the control stimulus differs from all previously presented training stimuli, then in addition to pure discrimination, the experiment explores the elements of generalization.
  • If the control stimulus causes the activation of a certain group of sensory elements that do not coincide with any of the elements that were activated under the influence of stimuli of the same class presented earlier, then the experiment explores pure generalization and does not include the study of recognition.

Despite the fact that perceptrons are not capable of pure generalization, they cope satisfactorily with recognition tasks, especially in those cases when images are shown in relation to which the perceptron already has some experience.
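
The error-correction rule behind such experiments can be sketched in a few lines of Python; the "stimuli" below are invented binary feature vectors, and the learning rate and epoch count are arbitrary.

```python
# A minimal Rosenblatt-style perceptron sketch (toy data, illustrative only):
# two classes of binary "stimuli", and the classic error-correction learning rule.

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights = [0.0] * len(samples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Reinforce the correct response: adjust weights only when wrong.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Hypothetical stimuli: class 1 activates the first two "sensory elements".
samples = [([1, 1, 0, 0], 1), ([1, 0, 0, 0], 1), ([0, 0, 1, 1], 0), ([0, 0, 0, 1], 0)]
weights, bias = train(samples)
print(predict(weights, bias, [1, 1, 0, 0]))     # expected 1
print(predict(weights, bias, [0, 0, 1, 1]))     # expected 0
```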

Human speech recognition and machine translation.

The long-term goals of artificial intelligence include creating programs that can recognize human language and use it to construct meaningful phrases. The ability to understand and apply natural language is a fundamental feature of human intelligence, and its successful automation would make computers much more useful. Many programs have been written that can understand natural language, and they are applied successfully in limited contexts, but so far there are no systems that can use natural language with the generality and flexibility of a person. The reason is that understanding natural language involves far more than parsing sentences into components and looking up the meanings of individual words in dictionaries; that part programs do well. Using human speech requires extensive knowledge about the subject of the conversation and the idioms related to it, as well as the ability to handle ambiguity, omissions, professional terms, jargon, colloquial expressions and much else that is inherent in normal human speech.

An example is a conversation about football, in which words such as "forward", "pass", "transfer", "penalty", "defender" and "captain" are used. Each of these words has a range of meanings, and individually the words are quite understandable, but a phrase made up of them will be incomprehensible to anyone who is not interested in football and knows nothing about the history, rules and principles of this game. Thus, a body of background knowledge is needed to understand and apply human language, and one of the main problems in automating the understanding and use of natural human language is the collection and systematization of such knowledge.

Since semantic meanings are very widely used in artificial intelligence, scientists have developed a number of methods that allow them to be structured to some extent. Yet most of the work is done in problem areas that are well understood and specialized. An example is the "microworld" technique. One of the first programs to use it was SHRDLU, developed by Terry Winograd, one of the systems for understanding human speech. The program's capabilities were rather limited and boiled down to a "conversation" about the arrangement of blocks of different colors and shapes, together with planning the simplest actions. It answered questions like "What color is the pyramid on the cross bar?" and could carry out instructions like "Put the blue block on the red one." Such problems were often taken up by artificial intelligence researchers and later became known as the "world of blocks".

Although the SHRDLU program successfully "talked" about the arrangement of blocks, it had no ability to generalize beyond this microworld. The methods it used were too simple to convey the semantic organization of more complex subject areas.

Current work in the field of understanding and applying natural languages is directed mainly towards finding sufficiently general representational formalisms that can be adapted to the specific structures of given areas and applied in a wide range of applications. Most of the existing methods, which are modifications of semantic networks, are studied and applied in programs that recognize natural language in narrow subject areas. At the same time, current capabilities do not allow the creation of a universal program capable of understanding human speech in all its diversity.

Among the variety of problems of pattern recognition, the following can be distinguished:

  • Classification of documents
  • Determination of mineral deposits
  • Image recognition
  • Barcode recognition
  • Character recognition
  • Speech recognition
  • Face recognition
  • Number plate recognition

Artificial intelligence in gaming programs.

Game artificial intelligence includes not only the methods of traditional AI but also algorithms from computer science in general, computer graphics, robotics and control theory. Both the system requirements and the budget of a game depend on how its AI is implemented, so developers have to strike a balance, trying to ensure that the game's artificial intelligence is created at minimum cost while remaining interesting and undemanding of resources. It uses a completely different approach than traditional artificial intelligence: in particular, emulation, deception and various simplifications are widely used. For example, a feature of first-person shooters is that bots can move precisely and aim instantly, leaving a human player no chance, so the abilities of bots are artificially reduced. At the same time, waypoints are placed on the level so that the bots can act as a team, set up ambushes, and so on.

In computer games controlled by game artificial intelligence, the following categories of characters are present:

  • mobs - characters with a low level of intelligence, hostile to the human player. Players destroy mobs in order to pass through territory and gain artifacts and experience points.
  • non-player characters - usually friendly or neutral toward the player.
  • bots - characters hostile to the players and the most difficult to program. Their capabilities approach those of player characters. At any given time, a certain number of bots oppose the player.

Within a computer game there are many areas in which a wide variety of heuristic game-AI algorithms are used. Game AI is most widely used as one of the ways to control non-player characters; another equally common method of control is scripting. Another obvious use of game AI, especially in real-time strategy games, is pathfinding: a method for determining how an NPC can get from one point on the map to another, taking into account obstacles, terrain and a possible "fog of war". Dynamic balancing of mobs also relies on artificial intelligence. Many games have tried the concept of unpredictable intelligence, among them Nintendogs, Black & White, Creatures and the well-known Tamagotchi toy. In these games the characters are pets whose behavior changes according to the player's actions. The characters seem to be able to learn, when in fact their actions are the result of choosing from a limited set of options.
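
The pathfinding mentioned above is usually implemented with search algorithms such as A*; as a deliberately simplified sketch, the Python fragment below uses plain breadth-first search on a hypothetical grid with obstacles and ignores terrain costs and the "fog of war".

```python
from collections import deque

# Minimal pathfinding sketch for an NPC on a grid (illustrative only; real games
# typically use A* with terrain costs). '#' cells are obstacles.
GRID = [
    ".....",
    ".###.",
    ".....",
]

def neighbors(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield nr, nc

def find_path(start, goal):
    """Breadth-first search; returns the list of cells from start to goal."""
    came_from = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for nxt in neighbors(cell):
            if nxt not in came_from:
                came_from[nxt] = cell
                queue.append(nxt)
    return None                                  # goal unreachable

print(find_path((0, 0), (2, 4)))
```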

Many game programmers consider any technique that creates the illusion of intelligence to be part of game artificial intelligence. This view is not entirely correct, since the same techniques can be used outside game AI engines. For example, when creating bots, algorithms are given information about possible future collisions, as a result of which the bots acquire the "ability" to avoid them; but these same techniques are an important and necessary component of a physics engine. Another example: data about the surrounding environment is an important component of a bot's aiming system, and the same data is widely used by the graphics engine when rendering. The final example is scripting: this tool can be used successfully in all aspects of game development, but it is most often treated as one of the ways to control the actions of NPCs.

According to purists, the expression "game artificial intelligence" has no right to exist, since it is an exaggeration. Their main argument is that only some areas of the science of classical artificial intelligence are used in game AI. It should also be kept in mind that the goals of AI are to create self-learning systems and even artificial intelligence capable of reasoning, whereas game AI is often limited to heuristics and a handful of rules of thumb, which are enough to create good gameplay and give the player vivid impressions of the game.

Currently, computer game developers are showing interest in academic AI, and the academic community, in turn, is beginning to take an interest in computer games. This raises the question of how far game and classical AI differ from each other. At the same time, game artificial intelligence is still considered a sub-branch of the classical field, because artificial intelligence has various areas of application that differ from one another. In game intelligence, an important difference is the possibility of cheating in order to cope with problems that are hard to solve in "legitimate" ways. On the one hand, the disadvantage of such deception is that it often leads to unrealistic character behavior and for this reason cannot always be used. On the other hand, the very possibility of such deception is an important distinguishing feature of game AI.

Another interesting task of artificial intelligence is teaching a computer to play chess. Scientists from all over the world have worked on it. The peculiarity of this task is that the computer can demonstrate its logical abilities only against a real opponent. The first such demonstration took place in 1974 in Stockholm, at the World Chess Championship among chess programs. The competition was won by the Kaissa program, created by Soviet scientists from the Institute of Control Problems of the USSR Academy of Sciences in Moscow.

Artificial intelligence in machine creativity.

The nature of human intellect has not yet been studied sufficiently, and the nature of human creativity even less so. Nevertheless, machine creativity is one of the areas of artificial intelligence. Modern computers create musical, literary and pictorial works, and the computer game and film industries have long used realistic machine-generated images. Existing programs create various images that can be easily perceived and understood by a person, which is especially important for intuitive knowledge, whose formal verification would require considerable mental effort. Musical tasks, for example, are successfully solved with specialized programming languages such as CSound. The special software used to create musical works includes algorithmic composition programs, interactive composition systems, and sound synthesis and processing systems.

Expert systems.

The development of modern expert systems has been carried out by researchers since the early 1970s, and in the early 1980s, expert systems began to be developed on a commercial basis. The prototypes of expert systems, proposed in 1832 by the Russian scientist S. N. Korsakov, were mechanical devices called "intelligent machines", which made it possible to find a solution, guided by given conditions. For example, the symptoms of the disease observed in the patient were analyzed, and the most appropriate medicines were suggested based on the results of this analysis.

Computer science considers expert systems together with knowledge bases. Expert systems are models of expert behavior based on decision-making procedures and logical inference. Knowledge bases are sets of facts and inference rules that relate directly to the chosen field of activity.
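
A toy illustration of this pairing of facts and inference rules is a forward-chaining loop; the rules below are invented for the example and are not taken from MYCIN or any real system.

```python
# A toy knowledge-based sketch (hypothetical rules): facts plus if-then rules,
# with forward chaining until no new conclusions appear.

RULES = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis"}, "order_lumbar_puncture"),
    ({"cough", "fever"}, "suspect_infection"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)            # fire the rule
                changed = True
    return facts

print(forward_chain({"fever", "stiff_neck"}, RULES))
# {'fever', 'stiff_neck', 'suspect_meningitis', 'order_lumbar_puncture'}
```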

At the end of the last century, a certain concept of expert systems developed, deeply oriented towards a textual human-machine interface, which was generally accepted at that time. Currently, this concept has undergone a serious crisis, apparently due to the fact that in user applications, the text-based interface has been replaced by a graphical one. In addition, the relational data model and the "classical" view of the construction of expert systems are poorly consistent with each other. Consequently, the organization of knowledge bases of expert systems cannot be carried out efficiently, at least with the use of modern industrial database management systems. Numerous examples of expert systems are given in the literature and online sources, called "common" or "widely known". In fact, all these expert systems were created back in the 80s of the last century and by now have either ceased to exist, or are hopelessly outdated and exist thanks to a few enthusiasts. On the other hand, developers of modern software products often refer to their creations as expert systems. Such statements are nothing more than a marketing ploy, because in reality these products are not expert systems (any of the computer legal reference systems can serve as an example). Enthusiasts are trying to combine approaches to creating a user interface with "classical" approaches to creating expert systems. These attempts have been reflected in projects such as CLIPS.NET, CLIPS Java Native Interface and others, but large software companies are in no hurry to fund such projects, and for this reason, development does not move beyond the experimental stage.

The whole variety of areas in which knowledge-based systems can be applied can be divided into classes: medical diagnostics, forecasting, planning, interpretation, control and management, fault diagnosis in electrical and mechanical equipment, and training. Let's look at each of these classes in more detail.

a) Medical diagnostic systems.

With the help of such systems, it is determined how various disturbances in the body's activity and their possible causes are interconnected. The most famous diagnostic system is MYCIN. It is used to diagnose meningitis and bacterial infections, as well as to monitor the condition of patients who have these diseases. The first version of the system was developed in the 70s. To date, its capabilities have expanded significantly: the system makes diagnoses at the same professional level as a specialist doctor, and can be used in various fields of medicine.

b) Predictive systems.

Such systems are designed to predict events or the outcomes of events based on available data characterizing the current situation or the state of an object. Thus, the program "Conquest of Wall Street", which uses statistical algorithms in its work, is able to analyze market conditions and develop an investment plan. Because the program uses the algorithms and procedures of traditional programming, it cannot be classified as a knowledge-based system. Already today there are programs that can predict passenger flow, crop yields and weather by analyzing the available data. Such programs are quite simple, and some of them can run on ordinary personal computers. However, there are still no expert systems that could suggest, based on market data, how to increase capital.

c) Planning.

Planning systems are designed to solve problems with a large number of variables in order to achieve specific results. The first commercial use of such systems was by the Damascus firm Informat, whose management installed 13 stations in the office lobby to provide free consultations to buyers wishing to purchase a computer. The machines helped customers make a choice that best suited their budget and wishes. Expert systems have also been used by Boeing for purposes such as repairing helicopters, determining the causes of aircraft engine failures, and designing space stations. DEC created the XCON expert system, which can determine and reconfigure VAX computer systems to meet customer requirements, and is currently developing a more powerful system, XSEL, that includes the XCON knowledge base. The purpose of XSEL is to help consumers select a computing system with the required configuration; the difference between XSEL and XCON is that XSEL is interactive.

d) Interpretation.

Interpretive systems are able to draw conclusions from the results of observation. One of the most famous interpretive systems is PROSPECTOR, which works with data based on the knowledge of nine experts. The effectiveness of the system can be judged from one example: combining nine different examination methods, the system discovered an ore deposit that none of the experts had predicted. Another well-known system of the interpretive type is HASP/SIAP, which uses data from acoustic tracking systems to determine the location and types of ships in the Pacific Ocean.

e) Intelligent control and management systems.

Expert systems are successfully used for control and management. They are able to analyze data received from several sources and make decisions based on the results of the analysis. Such systems are able to carry out medical monitoring and control the movement of aircraft, in addition, they are used in nuclear power plants. Also, with their help, the financial activity of the enterprise is regulated and solutions are developed in critical situations.

f) Diagnosis and troubleshooting of electrical and mechanical equipment.

Knowledge-based systems are used in cases such as:

repair of diesel locomotives, automobiles and other electrical and mechanical devices;

diagnostics and elimination of errors and malfunctions in software and hardware of computers.

g) Computer-based learning systems.

The use of knowledge-based systems for educational purposes is quite effective. Such a system analyzes the behavior and activity of its subject and, in accordance with the information received, changes its knowledge base. The simplest example of such learning is a computer game in which the levels become more difficult as the player's skill increases. An interesting learning system, EURISKO, was developed by D. Lenat. It uses simple heuristics and was applied in a game simulating naval combat, where the goal is to determine the optimal composition of a flotilla capable of winning battles while observing many rules. The system coped with this task successfully, including in its flotilla one small vessel and several ships capable of attacking. The rules of the game changed every year, but EURISKO won consistently for three years.

There are many expert systems that, according to the content of knowledge, can be attributed to several types at once. For example, a system that performs planning can also be a learning system. It is able to determine the level of knowledge of the student and, based on this information, draw up a curriculum. Control systems are used for planning, forecasting, diagnostics and control. Systems designed to protect a house or apartment can track changes in the environment, predict the development of the situation and draw up a plan for further action. For example, a window has opened and a thief is trying to enter the room through it, therefore, it is necessary to call the police.

The widespread use of expert systems began in the 1980s, when they were first introduced commercially. ES are used in many areas, including business, science, technology, manufacturing and other industries characterized by a well-defined subject area. In this context, “well-defined” means that a person can divide the course of reasoning into separate stages, and thus any problem that is within the scope of this area can be solved. Hence, similar actions can be done by a computer program. It is safe to say that the use of artificial intelligence opens up endless possibilities for humanity.

Among the most important classes of tasks posed to the developers of intelligent systems since artificial intelligence was recognized as a scientific direction (the mid-1950s) are the directions that address problems difficult to formalize: theorem proving, image recognition, machine translation and understanding of human speech, game programs, machine creativity, and expert systems. Let us briefly consider their essence.

Directions of artificial intelligence

Theorem proving. The study of theorem-proving techniques has played an important role in the development of artificial intelligence. Many informal problems, such as medical diagnosis, are solved using methodological approaches that were developed to automate theorem proving. Finding the proof of a mathematical theorem requires not only deduction from hypotheses but also forming intuitions about which intermediate statements should be proved on the way to the overall proof of the main theorem.

Image recognition. The use of artificial intelligence for pattern recognition has made it possible to create practically working systems for identifying graphic objects by similar features. Any characteristics of the objects to be recognized can be considered as features, but they must be invariant to the orientation, size and shape of the objects. The alphabet of features is formed by the system developer, and the quality of recognition largely depends on how well this alphabet has been chosen. Recognition consists in first obtaining a feature vector for an individual object selected in the image, and then determining which of the reference patterns of the feature alphabet this vector corresponds to.

Machine translation and human speech understanding. The analysis of sentences of human speech using a dictionary is a typical task for artificial intelligence systems. To solve it, an intermediary language was created to facilitate the matching of phrases from different languages. This intermediary language later turned into a semantic model for representing the meanings of texts to be translated, and the evolution of the semantic model led to a language for the internal representation of knowledge. As a result, modern systems analyze texts and phrases in four main stages: morphological, syntactic, semantic and pragmatic analysis.
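
A deliberately crude Python sketch of this staged analysis, with toy dictionaries standing in for real morphological, semantic and generation components (syntactic and pragmatic analysis are omitted entirely, and all entries are invented):

```python
# A heavily simplified interlingua-style pipeline (illustrative only): words are
# normalized, mapped into "interlingua" concepts, then rendered in a target language.

MORPHOLOGY = {"cats": "cat", "sleeps": "sleep", "sleep": "sleep"}   # lemmatization stub
INTERLINGUA = {"cat": "FELINE", "sleep": "REST"}                    # semantic concepts
TARGET = {"FELINE": "kot", "REST": "spit"}                          # toy target-language forms

def translate(sentence):
    lemmas = [MORPHOLOGY.get(w, w) for w in sentence.lower().split()]   # morphological stage
    concepts = [INTERLINGUA.get(l, l) for l in lemmas]                  # semantic stage
    return " ".join(TARGET.get(c, c) for c in concepts)                 # generation stage

print(translate("Cats sleep"))   # -> "kot spit"
```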

Game programs. Most game programs are based on a few basic ideas of artificial intelligence, such as enumeration of options and self-learning. One of the most interesting tasks in the field of game programs using artificial intelligence methods is teaching a computer to play chess. The problem was posed in the early days of computing, in the late 1950s.

In chess, there are certain levels of skill, degrees of quality of the game, which can give clear criteria for assessing the intellectual growth of the system. Therefore, scientists from all over the world were actively involved in computer chess, and the results of their achievements are used in other intellectual developments of real practical importance.

In 1974, the world championship among chess programs was held for the first time as part of the next IFIP (International Federation of Information Processing) congress in Stockholm. The winner of this competition was the chess program "Kaissa". It was created in Moscow, at the Institute of Control Problems of the USSR Academy of Sciences.

Machine creativity. One of the areas of application of artificial intelligence is software systems that can independently create music, poetry, stories, articles, theses and even dissertations. Today there is a whole class of musical programming languages (for example, CSound). For various musical tasks, special software has been created: sound processing systems, sound synthesis, interactive composition systems, and algorithmic composition programs.

Expert systems. Artificial intelligence methods have found application in the creation of automated consulting, or expert, systems. The first expert systems were developed as research tools in the 1960s.

They were artificial intelligence systems specifically designed to solve complex problems in a narrow subject area, such as medical diagnosis of diseases. The classic goal of this direction was initially to create a general purpose artificial intelligence system that would be able to solve any problem without specific knowledge in the subject area. Due to the limited capacity of computing resources, this problem turned out to be too difficult to solve with an acceptable result.

Commercial implementation of expert systems occurred in the early 1980s, and since then they have become widespread. They are used in business, science, technology, manufacturing and many other areas with a well-defined subject area. Here "well-defined" means that a human expert can lay out the stages of reasoning by which any problem in the given subject area can be solved, which in turn means that similar actions can be performed by a computer program.

Now we can say with certainty that the use of artificial intelligence systems opens up broad horizons.

Today, expert systems are one of the most successful applications of artificial intelligence technology, and they are well worth becoming familiar with in more detail.


International Institute "INFO-Ruthenia"

COURSE WORK

Discipline:

Research of control systems

Topic: Applications of artificial intelligence in the enterprise

Balatskaya E.N.

Contents

Introduction

1. The meaning of the term "artificial intelligence"

2. Artificial intelligence: areas of application

3. Artificial intelligence and prospects for its development

Conclusion

Glossary


Introduction

The science of artificial intelligence dates back to the middle of the 20th century. Since that time, in many research laboratories, scientists have been working on the creation of computers that have the ability to think at the same level as a person. At that time, the prerequisites for the emergence of artificial intelligence already existed. So, psychologists created a model of the human brain and studied the processes of thinking. Mathematicians created the theory of algorithms, which became the foundation of the mathematical theory of computation, knowledge about the world was ordered and structured, questions of optimal calculations were solved, and the very first computers were created.

The new machines were able to perform calculations much faster than humans, so scientists wondered whether computers could be created that reached the level of human development. In 1950, the English scientist Alan Turing published a paper asking "Can a machine think?", in which he proposed determining the degree of intelligence of a machine using a test he had developed, later known as the "Turing test".

Other scientists also worked in the field of AI, but they had to face a number of problems that could not be solved within the framework of traditional computer science. It turned out that, first of all, the mechanisms of sensory perception and the assimilation of information, as well as the nature of language, had to be studied. Imitating the work of the brain proved extremely difficult, since it would require reproducing the work of billions of interacting neurons. An even more challenging task than imitating the brain turned out to be studying the principles and mechanisms of its functioning. This problem, faced by researchers of intelligence, touched the theoretical side of psychology, and scientists have not yet been able to agree on what intelligence is. Some consider the ability to solve highly complex problems a sign of intelligence; for others, intelligence is first of all the ability to learn, generalize and analyze information; still others believe it is the ability to interact effectively with the outside world, to communicate, and to perceive and comprehend what is perceived.

In this course work, the object of study is artificial intelligence. The subject of the study is the possible ways of its improvement and development.

The purpose of the work: to identify areas of human activity in which artificial intelligence can be applied.

In the course of the research conducted within the framework of this work, it is supposed to solve several problems:

1) Consider the history of the emergence of artificial intelligence;

2) Identify the main goals of creating artificial intelligence;

3) Acquaint the reader with the kinds of applications of artificial intelligence in the modern world;

4) Explore promising areas in which artificial intelligence can be applied;

5) Consider what the future could be like with artificial intelligence.

The presented course work may be of interest to everyone who is interested in the history of the emergence and development of artificial intelligence, in addition, it can be used as a teaching aid.

1. The meaning of the term "artificial intelligence"

Humanity first heard about artificial intelligence more than 50 years ago, at a conference held in 1956 at Dartmouth College, where John McCarthy gave the term a clear and precise definition: "Artificial intelligence is the science of creating intelligent machines and computer programs. For the purposes of this science, computers are used as a means to understand the features of human intelligence; at the same time, the study of AI should not be limited to the use of biologically plausible methods."

Like other applied sciences, the science of artificial intelligence is represented by theoretical and experimental parts. In practice, "Artificial Intelligence" occupies an intermediate position between computer science and computing and such disciplines as cognitive and behavioral psychology and neurophysiology. As for the theoretical basis, it is the "Philosophy of Artificial Intelligence", but as long as there are no significant results in this area, the theory has no independent value. Nevertheless, already now it is necessary to distinguish between the science of artificial intelligence and other theoretical disciplines and methods (robotic, algorithmic, mathematical, physiological), which are of independent importance.

Today the development of AI proceeds in two directions: neurocybernetics and "black box" cybernetics. The first, neurocybernetics, or artificial intelligence proper, is based on modeling the work of the human brain with systems known as neural networks. The second, "black box" cybernetics, or machine intelligence, searches for and develops algorithms that solve intellectual problems efficiently on existing computer models. For this direction the main thing is not the design of the device but the principle of its operation: the reaction of a "thinking" machine to input actions must be the same as that of the human brain.

Many books have been written about artificial intelligence, but not a single author gives an unambiguous answer to the question of what this science does. Most authors consider only one definition of AI, considering scientific achievements only in the light of this definition. The next problem concerns the nature of the human intellect and its status: in philosophy there is still no unambiguous criterion for them. There is no single approach to determining the degree of “reasonableness” of a machine. However, there are many hypotheses proposed since the dawn of artificial intelligence. This is the Turing test, which was mentioned above, and the Newell-Simon hypothesis, and many other approaches to the development of AI, of which two main ones can be distinguished:

semiotic, or top-down: based on the creation of knowledge bases, inference systems and expert systems that imitate high-level mental processes such as thinking, reasoning, emotion, speech and creativity;

biological, or bottom-up: based on the creation and study of neural networks that mimic the processes of the human brain, as well as the creation of biocomputers, neurocomputers and other similar computing systems.

The second approach goes beyond the definition given by John McCarthy, but has the same ultimate goal, so there is every reason to attribute it to the field of artificial intelligence.

In combination with cognitive psychology, epistemology and neurophysiology, artificial intelligence forms another science, cognitology. Epistemology is directly related to the problems of AI, since it is the science of knowledge (a part of philosophy), and philosophy, in turn, plays an important role in artificial intelligence. Philosophers and AI engineers solve similar problems: both are looking for the best ways of representing and using information and knowledge.

Cognitive modeling is a method proposed and first tested by Axelrod. It is used to make decisions in insufficiently defined situations and is based on modeling the subjective ideas of one or several experts about the situation. The expert's representation is modeled by a cognitive map (F, W), where F is the set of all factors of the situation and W is the set of cause-and-effect relationships between them, together with methods for analyzing the situation. At present, the main direction in the development of cognitive modeling is the improvement of the apparatus for modeling and analyzing the situation; in particular, various methods for predicting the situation and for solving inverse problems are being developed.
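
As a small illustration, a cognitive map (F, W) can be coded as a set of factors and a table of signed influence weights through which a change is propagated for a few steps; the factors and numbers below are purely hypothetical.

```python
# A toy cognitive map (F, W): factors are nodes, W holds signed causal influences
# (illustrative values only). We propagate a unit rise in "demand" for a few steps.

W = {
    "demand":     {"price": +0.5, "production": +0.3},
    "price":      {"demand": -0.4},
    "production": {"price": -0.2},
}

def propagate(state, steps=3):
    for _ in range(steps):
        new_state = dict(state)
        for source, targets in W.items():
            for target, weight in targets.items():
                new_state[target] += weight * state[source]   # apply causal influence
        state = new_state
    return state

initial = {"demand": 1.0, "price": 0.0, "production": 0.0}
print(propagate(initial))
```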

In computer science, the solution of artificial intelligence problems is carried out using the design of knowledge bases and expert systems. Knowledge bases are a collection of knowledge and rules according to which information can be meaningfully processed. In general, the problems of artificial intelligence in computer science are studied with the aim of creating information systems, their operation and improvement. The training of developers and users of such systems is handled by specialists in the field of information technology.

It is quite natural that attempts to create artificial intelligence have attracted and continue to attract the attention of philosophers. The emergence of the first intelligent systems could not fail to touch many questions concerning human knowledge, the world order and the place of man in the world. Conventionally, all philosophical problems in this area can be divided into two groups: the possibility of creating artificial intelligence and the ethics of artificial intelligence. Most of the questions in the first group concern whether and how AI can be created; the second group concerns the possible consequences of the advent of AI for all mankind. At the same time, in transhumanism the creation of AI is considered one of the primary tasks facing humanity.

Scientists at the Singularity Institute (SIAI), based in the US, are actively exploring the potential for global risks that could arise from the creation of superhuman artificial intelligence. To prevent such risks, AI should be programmed to be human-friendly. In the film "I, Robot" the problem of the ethics of artificial intelligence is quite reasonably touched upon. Some scientists believe that the laws of robotics may encourage "computer intelligence" to seize power on Earth in order to "protect" the population from harm.

As for religious denominations, most of them are quite calm about the creation of AI. For example, the Dalai Lama, the Buddhist spiritual leader, believes that computer-based consciousness may well exist, and the religious movement of the Raelites actively supports developments in this area. Other faiths raise issues related to AI too rarely for one to speak of a pronounced position.

2. Artificial intelligence: areas of application

From the moment when artificial intelligence was recognized as a scientific direction, and this happened in the mid-50s of the last century, the developers of intelligent systems have had to solve many problems. Conventionally, all tasks can be divided into several classes: human language recognition and translation, automatic theorem proving, creation of game programs, image recognition and machine creativity. Let us briefly consider the essence of each class of problems.

Proof of theorems.

Automated theorem proving is the oldest application of artificial intelligence. A lot of research has been carried out in this area, which resulted in the appearance of formalized search algorithms and formal representation languages, such as PROLOG - a logical programming language, and predicate calculus.

Automatic proofs of theorems are attractive because they are based on the generality and rigor of logic. Logic in a formal system implies the possibility of automation, which means that if you represent the task and additional information related to it as a set of logical axioms, and special cases of the task as theorems that require proof, you can get a solution to many problems. Systems of mathematical justification and automatic proofs of theorems are based on this principle. In past years, repeated attempts have been made to write a program for automatic proofs of theorems, but it has not been possible to create a system that allows solving problems using a single method. Any relatively complex heuristic system could generate many irrelevant provable theorems, with the result that programs had to prove them until the right one was found. Because of this, the opinion has arisen that large spaces can only be dealt with with the help of informal strategies specially designed for specific cases. In practice, this approach turned out to be quite fruitful and was, along with others, the basis of expert systems.

At the same time, reasoning based on formal logic cannot be ignored. A formalized approach allows solving many problems. In particular, using it, you can manage complex systems, check the correctness of computer programs, design and test logical circuits. In addition, researchers in automatic theorem proving have developed powerful heuristics based on the evaluation of the syntactic form of logical expressions. As a result, it became possible to reduce the level of complexity of the search space without resorting to the development of special strategies.

Automatic theorem proving is also of interest to scientists for the reason that for particularly complex problems it is also possible to use the system, although not without human intervention. Currently, programs often act as assistants. Specialists break the task into several subtasks, then heuristics are thought out to sort out possible reasons. Next, the program proves lemmas, checks less essential assumptions, and makes additions to the formal aspects of the proofs outlined by the person.

Pattern recognition.

Pattern recognition is the selection of essential features that characterize the initial data from the total set of features, and on the basis of the information received, the assignment of data to a certain class.

The theory of pattern recognition is a branch of computer science whose task is to develop the foundations and methods for identifying and classifying objects (items, processes, phenomena, situations, signals, etc.), each of which is characterized by a set of features and properties. Objects need to be identified quite often in practice; a typical example is recognizing the color of a traffic light and deciding whether to cross the street. There are other areas in which object recognition is indispensable, such as the digitization of analog signals, military affairs and security systems, so scientists continue to work actively on creating recognition systems.

The work is carried out in two main directions:

Research, explanation and modeling of the recognition abilities inherent in living beings.

Development of theoretical and methodological foundations for the creation of devices that would allow solving individual problems for applied purposes.

Recognition problems are formulated in mathematical language. Whereas the theory of artificial neural networks obtains results through experiments, the formulation of pattern recognition problems rests not on experiment but on mathematical proof and logical reasoning.

Consider the classical formulation of such a problem. There is a set of objects that need to be classified. The set consists of subsets, or classes. Given: information describing the set, information about the classes, and a description of a single object without an indication of its class. Task: using the available data, determine which class the object belongs to.
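
As an illustration of this formulation, a toy nearest-neighbor classifier is sketched below in Python (the feature values and class names are invented): the labeled examples describe the classes, and a new object is assigned to the class of its closest labeled neighbor.

    import math

    # Labeled examples: feature vectors (invented 2-D features) and their classes.
    training = [
        ((1.0, 1.2), "class_A"),
        ((0.8, 0.9), "class_A"),
        ((4.1, 3.8), "class_B"),
        ((3.9, 4.2), "class_B"),
    ]

    def classify(x):
        """Assign x to the class of the nearest labeled example (1-NN)."""
        return min(training, key=lambda item: math.dist(x, item[0]))[1]

    print(classify((1.1, 1.0)))  # -> "class_A"
    print(classify((4.0, 4.0)))  # -> "class_B"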

If a problem involves monochrome images, these can be regarded as functions on a plane. The function is a formal record of the image and at each point expresses some characteristic of it: optical density, transparency, brightness, and so on. The model of the set of images is then the set of functions on the plane. The formulation of the recognition problem depends on what the stages following recognition are supposed to be.

Early pattern recognition methods include the experiments of F. Rosenblatt, who introduced the concept of a brain model. The task of the experiment is to show how psychological phenomena arise in a physical system with known functional properties and structure. Rosenblatt described the simplest recognition experiments, whose distinctive feature is a non-deterministic solution algorithm.

The simplest experiment from which psychologically significant information about the system can be obtained is as follows: the perceptron is presented with a sequence of two different stimuli, it must react to each of them in some way, and the reaction must differ for different stimuli. The purpose of such an experiment can vary. The experimenter may set out to study whether the system can spontaneously discriminate between the presented stimuli without outside interference, or, conversely, to study forced recognition, in which the experimenter teaches the system to classify various objects, of which there may be more than two. The training proceeds as follows: the perceptron is presented with images among which there are representatives of all the classes to be recognized, and correct responses are reinforced according to the memory modification rules. The experimenter then presents a control stimulus and determines the probability of obtaining the required response for images of that class. The control stimulus may coincide with one of the objects presented in the training sequence or differ from all of them. Depending on this, the following results are obtained:

If the control stimulus differs from all previously presented training stimuli, then, in addition to pure discrimination, the experiment explores the elements of generalization.

If a control stimulus causes the activation of a certain group of sensory elements that do not match any of the elements that were activated under the influence of stimuli of the same class presented earlier, then the experiment investigates a pure generalization and does not include the study of recognition.

Although perceptrons are not capable of pure generalization, they cope satisfactorily with recognition tasks, especially when they are shown images with which the perceptron already has some experience.
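
The training procedure described above can be sketched with the classical perceptron learning rule (a simplified Python illustration, not Rosenblatt's original model; the stimuli and parameters are invented): after each presented stimulus, the connection weights are reinforced or weakened depending on whether the reaction was correct.

    # Minimal perceptron sketch: learn to separate two classes of stimuli.
    # Stimuli are binary feature vectors; the desired reaction is +1 or -1.
    stimuli = [([1, 0, 1], 1), ([1, 1, 1], 1), ([0, 1, 0], -1), ([0, 0, 1], -1)]
    weights = [0.0, 0.0, 0.0]
    bias = 0.0
    rate = 0.1

    def react(x):
        s = bias + sum(w * xi for w, xi in zip(weights, x))
        return 1 if s >= 0 else -1

    for _ in range(20):                       # repeated presentation of the training sequence
        for x, target in stimuli:
            error = target - react(x)         # 0 if the reaction was already correct
            for i, xi in enumerate(x):
                weights[i] += rate * error * xi   # adjust the connections that fired
            bias += rate * error

    print([react(x) for x, _ in stimuli])     # control presentation: expected [1, 1, -1, -1]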

Human speech recognition and machine translation.

The long-term goals of artificial intelligence include creating programs that can recognize human language and use it to construct meaningful phrases. The ability to understand and use natural language is a fundamental feature of human intelligence, and its successful automation would make computers far more useful. Many programs have been written that understand natural language, and they are successfully applied in limited contexts, but there are as yet no systems that can use natural language with the generality and flexibility of a human. The reason is that understanding natural language is not just parsing sentences into components and looking up the meanings of individual words in dictionaries; that much programs do well. Using human language requires extensive knowledge about the subject of the conversation and the idioms related to it, as well as the ability to deal with ambiguity, omissions, professional terms, jargon, colloquial expressions and much else that is inherent in ordinary human speech.

An example is a conversation about football, with words such as "forward", "pass", "transfer", "penalty", "defender" and "captain". Each of these words has a range of meanings, and individually the words are quite understandable, but a phrase made up of them will be incomprehensible to anyone who is not keen on football and knows nothing about the history, rules and principles of the game. Thus, a body of background knowledge is needed to understand and use human language, and one of the main problems in automating this ability is the collection and systematization of such knowledge.

Since semantic meaning is used very widely in artificial intelligence, scientists have developed a number of methods for structuring it to some extent. Still, most of the work is done in problem areas that are well understood and specialized. An example is the "microworld" technique. One of the first programs to use it was SHRDLU, developed by Terry Winograd, one of the early systems for understanding human speech. The program's capabilities were quite limited and amounted to a "conversation" about the arrangement of blocks of different colors and shapes and to planning the simplest actions. SHRDLU answered questions like "What color is the pyramid on the crossbar?" and could carry out instructions like "Put the blue block on the red one." Problems of this kind were often taken up by artificial intelligence researchers and later became known as the "world of blocks".
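
The flavor of the "microworld" technique can be conveyed by a few lines of Python (a toy sketch, not Winograd's SHRDLU; the object names are invented): the entire world fits into one small table of blocks, and "understanding" a question or command reduces to looking facts up in that table and updating it.

    # A toy "world of blocks": every fact about the world fits in one dictionary.
    world = {
        "red_block":     {"color": "red",   "on": "table"},
        "blue_block":    {"color": "blue",  "on": "table"},
        "green_pyramid": {"color": "green", "on": "red_block"},
    }

    def what_is_on(support):
        """Answer questions such as 'what is on the red block?'."""
        return [name for name, props in world.items() if props["on"] == support]

    def put_on(obj, support):
        """Execute commands such as 'put the blue block on the green pyramid'."""
        world[obj]["on"] = support

    print(what_is_on("red_block"))      # ['green_pyramid']
    put_on("blue_block", "green_pyramid")
    print(what_is_on("green_pyramid"))  # ['blue_block']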

Although SHRDLU successfully "talked" about the arrangement of blocks, it could not abstract away from this microworld. It used methods that were too simple to convey the semantic organization of more complex subject areas.

Current work on understanding and using natural language is directed mainly at finding sufficiently general representational formalisms that can be adapted to the specific structures of given areas and applied in a wide range of applications. Most existing methods, which are modifications of semantic networks, are studied and applied in programs that recognize natural language in narrow subject areas. Modern capabilities still do not allow the creation of a universal program capable of understanding human speech in all its diversity.

Among the variety of problems of pattern recognition, the following can be distinguished:

Identification of mineral deposits

Image recognition

Barcode recognition

Character recognition

Speech recognition

Face recognition

Vehicle number recognition

Artificial intelligence in gaming programs.

Game artificial intelligence includes not only the methods of traditional AI but also algorithms from computer science in general, computer graphics, robotics and control theory. How the AI is implemented affects not only the system requirements but also the budget of the game, so developers have to strike a balance, trying to create game AI at minimal cost while keeping it interesting and undemanding of resources. The approach is quite different from that of traditional artificial intelligence: emulation, deception and various simplifications are widely used. For example, bots in first-person shooters can move precisely and aim instantly, which would leave a human player without a chance, so their abilities are deliberately weakened, as sketched below. At the same time, waypoints are placed on the level so that the bots can act as a team, set up ambushes, and so on.
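
In code, such artificial weakening might look roughly like the following Python sketch (the function and parameter names are invented): the bot first computes a perfect aiming angle, which is then blurred by a random error and delayed to imitate human reaction time.

    import math
    import random

    def bot_aim(bot_pos, target_pos, max_error_deg=4.0, reaction_delay_s=0.25):
        """Return an (angle, delay) pair: a deliberately imperfect shot.

        The exact angle to the target is computed first, then blurred by a random
        error so the bot sometimes misses; the delay imitates human reaction time.
        """
        dx = target_pos[0] - bot_pos[0]
        dy = target_pos[1] - bot_pos[1]
        perfect_angle = math.degrees(math.atan2(dy, dx))
        error = random.uniform(-max_error_deg, max_error_deg)
        return perfect_angle + error, reaction_delay_s

    angle, delay = bot_aim((0, 0), (10, 5))
    print(f"fire at {angle:.1f} degrees after {delay:.2f} s")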

In computer games controlled by game artificial intelligence, the following categories of characters are present:

mobs are characters with a low level of intelligence that are hostile to the human player. Players destroy mobs in order to pass through territory and to obtain artifacts and experience points.

non-player characters - usually these characters are friendly or neutral to the player.

bots are characters hostile to players and the most difficult to program. Their capabilities approach those of a human player. At any given moment, a certain number of bots oppose the player.

Within a computer game there are many areas in which a wide variety of heuristic game AI algorithms are used. Game AI is most widely used as one of the ways of controlling non-player characters. Another equally common method of control is scripting. Another obvious use of game AI, especially in real-time strategy games, is pathfinding, that is, determining how an NPC gets from one point on the map to another while taking obstacles, terrain and possible "fog of war" into account (a minimal sketch is given after this paragraph). Dynamic balancing of mobs also relies on artificial intelligence. Many games have tried the concept of unpredictable intelligence, for example Nintendogs, Black & White, Creatures and the well-known Tamagotchi toy. In these games the characters are pets whose behavior changes according to the player's actions. The characters appear to learn, when in fact their actions are the result of choosing from a limited set of options.
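
Pathfinding itself is usually handled by graph-search algorithms such as A* or breadth-first search. A minimal breadth-first sketch in Python is shown below (the grid layout is invented): obstacles are cells marked 1, and the function returns the shortest sequence of cells from start to goal.

    from collections import deque

    # 0 = walkable, 1 = obstacle; the NPC must get from `start` to `goal`.
    grid = [
        [0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0],
    ]

    def find_path(start, goal):
        """Breadth-first search; returns the shortest list of cells or None."""
        queue = deque([[start]])
        visited = {start}
        while queue:
            path = queue.popleft()
            r, c = path[-1]
            if (r, c) == goal:
                return path
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                        and grid[nr][nc] == 0 and (nr, nc) not in visited):
                    visited.add((nr, nc))
                    queue.append(path + [(nr, nc)])
        return None

    print(find_path((0, 0), (3, 3)))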

Many game programmers consider any technique that creates the illusion of intelligence to be part of game artificial intelligence. This view is not entirely correct, however, because the same techniques are used outside game AI engines. For example, bots are given algorithms that include information about possible future collisions, which gives them the "ability" to avoid those collisions; yet the same techniques are an important and necessary component of a physics engine. Another example: an important component of a bot's aiming system is visibility (line-of-sight) data, and the same data are widely used by the graphics engine during rendering. A final example is scripting, a tool that can be used in every aspect of game development but is most often regarded as one of the ways of controlling the actions of NPCs.

According to purists, the expression "game artificial intelligence" has no right to exist, since it is an exaggeration. Their main argument is that game AI uses only some areas of the science of classical artificial intelligence. It should also be borne in mind that the goals of AI are the creation of self-learning systems and even of artificial intelligence capable of reasoning, whereas game AI often confines itself to heuristics and a handful of rules of thumb, which are enough to create good gameplay and give the player vivid impressions of the game.

Currently, computer game developers are showing interest in academic AI, and the academic community, in turn, is beginning to take an interest in computer games. This raises the question of how much game AI and classical AI differ from each other. Game artificial intelligence is still regarded as a sub-branch of the classical field, since artificial intelligence has various application areas that differ from one another. An important distinguishing feature of game intelligence is the permissibility of cheating as a way of solving some problems. On the one hand, the drawback of such deception is that it often leads to unrealistic character behavior and therefore cannot always be used. On the other hand, the very possibility of such deception is an important difference of game AI.

Another interesting task of artificial intelligence is teaching a computer to play chess, a problem tackled by scientists all over the world. Its peculiarity is that the computer can demonstrate its logical abilities only against a real opponent. The first such demonstration took place in 1974 in Stockholm at the World Chess Championship among chess programs. The competition was won by the Kaissa program, created by Soviet scientists at the Institute of Control Sciences of the USSR Academy of Sciences in Moscow.

Artificial intelligence in machine creativity.

The nature of human intelligence has not yet been studied sufficiently, and the nature of human creativity even less so. Nevertheless, machine creativity is one of the areas of artificial intelligence. Modern computers create musical, literary and pictorial works, and the computer game and film industries have long used realistic machine-generated images. Existing programs create images that people can easily perceive and understand, which is especially important for intuitive knowledge, whose formalized verification would require considerable mental effort. Thus, musical problems are successfully solved with specialized programming languages, one of which is CSound. Dedicated software for creating music includes algorithmic composition programs, interactive composition systems, and sound synthesis and processing systems.
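
In its crudest form, algorithmic composition can be sketched as a random walk over the notes of a scale (a toy Python illustration, unrelated to any particular CSound program):

    import random

    # Algorithmic composition reduced to its simplest form: a random walk on a scale.
    scale = ["C", "D", "E", "F", "G", "A", "B"]

    def compose(length=16, seed=0):
        """Generate a short melody by stepping up, down or staying on the scale."""
        random.seed(seed)
        melody = []
        index = 0
        for _ in range(length):
            index = max(0, min(len(scale) - 1, index + random.choice([-1, 0, 1])))
            melody.append(scale[index])
        return melody

    print(" ".join(compose()))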

Expert systems.

Modern expert systems have been developed by researchers since the early 1970s, and in the early 1980s they began to be developed commercially. Their prototypes, proposed in 1832 by the Russian scientist S. N. Korsakov, were mechanical devices called "intelligent machines" that could find a solution according to given conditions: for example, they analyzed the symptoms observed in a patient and suggested the most appropriate medicines.

Computer science considers expert systems together with knowledge bases. An expert system is a model of expert behavior based on decision-making procedures and logical inference. A knowledge base is a set of facts and inference rules directly related to the chosen field of activity.
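
The combination of facts and inference rules can be illustrated with a minimal forward-chaining sketch in Python (the symptom and conclusion names are invented): rules are applied to the known facts until no new conclusion can be derived.

    # A toy knowledge base: facts about one case and IF-THEN inference rules.
    facts = {"fever", "cough"}
    rules = [
        ({"fever", "cough"}, "suspect_flu"),
        ({"suspect_flu"},    "recommend_rest"),
    ]

    # Forward chaining: apply rules until no new fact can be derived.
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # now includes 'suspect_flu' and 'recommend_rest'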

At the end of the last century a particular conception of expert systems took shape, oriented towards the text-based human-machine interface that was then generally accepted. This conception is now in serious crisis, apparently because the text interface in user applications has been supplanted by a graphical one. In addition, the relational data model and the "classical" view of how expert systems are built fit together poorly, so the knowledge bases of expert systems cannot be organized efficiently, at least with modern industrial database management systems. Literature and online sources cite numerous examples of "common" or "widely known" expert systems. In fact, all of these were created back in the 1980s and by now have either ceased to exist or are hopelessly outdated, surviving thanks to a few enthusiasts. On the other hand, developers of modern software products often describe their creations as expert systems. Such statements are little more than a marketing ploy, because in reality these products are not expert systems (any of the computer legal reference systems can serve as an example). Enthusiasts are trying to combine modern approaches to user-interface design with "classical" approaches to building expert systems; these attempts are reflected in projects such as CLIPS.NET and the CLIPS Java Native Interface, but large software companies are in no hurry to fund them, so development does not advance beyond the experimental stage.

The whole variety of areas in which knowledge-based systems can be applied can be divided into classes: medical diagnostics, forecasting, planning, interpretation, control and management, fault diagnosis in electrical and mechanical equipment, and education. Let us look at each of these classes in more detail.

a) Medical diagnostic systems.

With the help of such systems, it is determined how various disturbances in the body's activity and their possible causes are interconnected. The most famous diagnostic system is MYCIN. It is used to diagnose meningitis and bacterial infections, as well as to monitor the condition of patients who have these diseases. The first version of the system was developed in the 70s. To date, its capabilities have expanded significantly: the system makes diagnoses at the same professional level as a specialist doctor, and can be used in various fields of medicine.

b) Predictive systems.

These systems are designed to predict events or the outcomes of events on the basis of data describing the current situation or the state of an object. Thus, the Wall Street Conquest program, which uses statistical methods in its algorithms, can analyze market conditions and develop an investment plan. Because the program relies on the algorithms and procedures of traditional programming, it cannot be classified as a knowledge-based system. There are already programs that predict passenger flows, crop yields and the weather by analyzing available data. Such programs are quite simple, and some of them run on ordinary personal computers. However, there are still no expert systems that could suggest, on the basis of market data, how to increase capital.

c) Planning.

Planning systems are designed to solve problems with a large number of variables in order to achieve specific results. The first commercial use of such systems was by the Damascus firm Informat, whose management installed 13 stations in the office lobby to provide free advice to buyers wishing to purchase a computer. The machines helped customers make the choice that best matched their budget and wishes. Expert systems have also been used by Boeing for purposes such as repairing helicopters, determining the causes of aircraft engine failures, and designing space stations. DEC created the XCON expert system, which can determine and reconfigure VAX computer system configurations to meet customer requirements, and is developing the more powerful XSEL system, which includes the XCON knowledge base and is intended to help consumers select a computing system with the required configuration. XSEL differs from XCON in being interactive.

d) Interpretation.

Interpretive systems are able to draw conclusions from the results of observations. One of the best-known interpretive systems is PROSPECTOR, which works with data based on the knowledge of nine experts. The effectiveness of the system can be judged from one example: using nine different examination methods, it discovered an ore deposit that none of the experts had anticipated. Another well-known interpretive system is HASP/SIAP, which uses data from acoustic tracking systems to determine the location and types of ships in the Pacific Ocean.

e) Intelligent control and management systems.

Expert systems are successfully used for control and management. They can analyze data received from several sources and make decisions based on the results of the analysis. Such systems carry out medical monitoring and control the movement of aircraft, and they are also used at nuclear power plants. They also help regulate the financial activity of enterprises and develop solutions in critical situations.

f) Diagnosis and troubleshooting of electrical and mechanical equipment.

Knowledge-based systems are used in cases such as:

repair of diesel locomotives, automobiles and other electrical and mechanical devices;

diagnostics and elimination of errors and malfunctions in software and hardware of computers.

g) Computer-based education systems.

Knowledge-based systems are used quite effectively for educational purposes. The system analyzes the behavior and activity of its subject and changes the knowledge base in accordance with the information received. The simplest example of such learning is a computer game whose levels become more difficult as the player's skill increases. An interesting training system, EURISKO, was developed by D. Lenat. It uses simple heuristics. The system was applied in a game simulating combat operations, the essence of which is to determine the optimal composition of a flotilla that can inflict defeat while observing many rules. The system coped with the task successfully, including in the flotilla one small vessel and several ships capable of attacking. The rules of the game changed every year, but EURISKO consistently won over three years.

There are many expert systems that, in terms of the content of their knowledge, can be assigned to several types at once. For example, a planning system can also be a learning system: it can determine a student's level of knowledge and draw up a curriculum on that basis. Control systems are used for planning, forecasting, diagnostics and control. Systems designed to protect a house or apartment can track changes in the environment, predict how the situation will develop and draw up a plan of further action; for example, if a window has opened and a thief is trying to enter the room through it, the system must call the police.

The widespread use of expert systems began in the 1980s, when they were first offered commercially. They are used in many areas, including business, science, engineering, manufacturing and other domains with a well-defined subject area. Here "well-defined" means that a person can divide the course of reasoning into separate stages, so that any problem within the scope of the area can be solved; a computer program can then perform the same actions. It is safe to say that the use of artificial intelligence opens up vast possibilities for humanity.

Artificial intelligence and prospects for its development

David Lodge's Small World, a novel about the academic world of literary criticism, contains a remarkable scene: the protagonist asks a group of prominent literary theorists what would happen if they were right. The question caused confusion; the theorists disagreed among themselves, but none of them had previously considered that arguing about irrefutable theories is a pointless occupation. If the same question were put to scientists researching artificial intelligence, they would probably be just as confused. What would happen if they achieved their goals? Intelligent computers already demonstrate remarkable achievements, and everyone understands that they are more useful than machines without intelligence. It would seem that there is nothing to worry about, yet there are a number of ethical issues that need to be taken into account.

Intelligent computers are more powerful than non-intelligent ones, but is it possible to make sure that this power is always used only for good, and not for evil? Artificial intelligence researchers who have devoted their entire lives to developments in this area should be aware of their degree of responsibility to ensure that the results of their work have only a positive impact on humanity. The degree of this influence is directly related to the degree of artificial intelligence. Even the earliest advances made in this area have had a significant impact on how computer science is taught and how software and hardware are developed. Artificial intelligence has made it possible to create search engines, robots, effective surveillance systems, inventory management systems, speech recognition, and a number of other fundamentally new applications.

According to developers, even moderate successes in artificial intelligence can have a tremendous impact on the way of life of people around the planet. Until now, only the Internet and cellular communications have had such a pervasive influence, while the influence of artificial intelligence has remained negligible. But one can imagine the benefits that personal assistants for the home or office would bring to mankind, and how the quality of everyday life would improve with their appearance, even though at first this may entail a number of economic problems. At the same time, the technological possibilities that have opened up may lead to the creation of autonomous weapons, whose appearance many consider undesirable. Finally, it may well be that success in creating an artificial intelligence superior to the human mind will radically change the life of mankind. People will work, relax and entertain themselves differently, and ideas about consciousness, intellect and the very future of mankind will change. It is easy to see that the emergence of a superior intelligence could seriously damage human freedom, self-determination and even existence; at the very least, all of these may be at risk. Research related to artificial intelligence should therefore be carried out with an awareness of its possible consequences.

What could be the future? Most science fiction novels follow pessimistic rather than optimistic scenarios, perhaps because such novels are more appealing to readers. But in reality, most likely, everything will be different. The development of artificial intelligence occurs in the same way as telephony, aeronautics, engineering equipment, printing and other revolutionary technologies developed at one time, the introduction of which brought more positive rather than negative consequences.

It is also worth noting that despite the short history of the existence of artificial intelligence, significant progress has been made in this area. However, if humanity could look into the future, it would see how little has yet been done compared to what remains to be done.

Conclusion


Disputes about the possibility of creating artificial intelligence do not stop in the scientific community. According to many, the creation of AI will entail a humiliation of human dignity. Speaking about the possibilities of AI, one should not forget about the need to develop and improve human intelligence.

The benefit of using AI is that it provides an impetus for further progress and greatly increases labor productivity by automating production. For all its advantages, however, the field also has drawbacks to which humanity should pay close attention. The main one is the danger that working with AI can entail. Another problem is that people may lose the incentive to be creative: computers are becoming ubiquitous in the arts and seem to be pushing people out of them. One can only hope that skilled creative activity will remain attractive and that the best musical, literary and pictorial works will continue to be created by people.

There is another, more serious group of problems. Modern machines and programs can adapt to changing external factors, that is, learn. Very soon, machines will be developed whose adaptability and reliability will allow humans not to interfere in decision-making at all. This may leave people unable to act adequately in an emergency, or unable to take over control at the moment when it is needed. It is therefore already worth thinking about placing limits on the automation of processes, especially those associated with severe emergencies, so that the person supervising the controlling machine can respond correctly and make the right decision in an unforeseen situation.

Such situations can arise in the field of transport, in nuclear power and missile forces. In the latter case, a mistake can lead to dire consequences. But the probability of errors always exists and remains even in the case of duplication and multiple rechecks. This means that an operator must be present to control the machine.

It is already obvious that people will constantly have to solve problems associated with artificial intelligence, as they appear now and will appear in the future.

In this course work, the tasks of artificial intelligence, the history of its emergence, its areas of application and some problems related to AI have been considered. The information presented here will be of interest to those who follow modern technologies and advances related to artificial intelligence. The objectives of this course work have been fulfilled.



Study questions

  1. The concept of artificial intelligence
  2. AIS tools
  3. Purpose and structure of expert systems

Artificial intelligence is a scientific discipline that emerged in the 1950s at the intersection of cybernetics, linguistics, psychology and programming.

Artificial intelligence has a long history. Even Plato, Aristotle, Socrates, R. Descartes, G. Leibniz, G. Boole, and later N. Wiener and many other researchers sought to describe thinking as a set of elementary operations, rules and procedures.

Here are some definitions of artificial intelligence published in various sources.

1. AI is a symbol for cybernetic systems that model some aspects of intelligent (reasonable) human activity: logical and analytical thinking.

2. AI is the ability of a robot or computer to imitate human skills in solving problems, learning, reasoning and self-improvement.

3. AI is a scientific field concerned with the development of algorithms and programs for automating activities that require human intelligence.

4. AI is one of the areas of informatics whose purpose is the development of hardware and software that allows a non-programmer user to pose and solve tasks traditionally considered intellectual by communicating with a computer in a limited subset of natural language.

Since the beginning of research in the field of AI, two scientific directions have been distinguished: neurocybernetics (or artificial intelligence proper) and "black box" cybernetics (or machine intelligence).

Recall that cybernetics is the science of control, communication and information processing. Cybernetics explores objects regardless of their material nature (living and non-living systems).

The first direction, neurocybernetics, is based on hardware modeling of the human brain, which consists of a large number (about 14 billion) of connected and interacting nerve cells, or neurons.

Artificial intelligence systems that simulate the work of the brain are called neural networks. The foundations of this direction were laid by the American scientists W. McCulloch and W. Pitts, who proposed the formal neuron model in 1943, and F. Rosenblatt, who created the first neural networks (perceptrons) in the late 1950s.

For the second direction of AI, "black box" cybernetics, it does not matter how the "thinking" device is constructed. The main thing is that it reacts to given inputs in the same way as the human brain.

Computer users quite often encounter manifestations of artificial intelligence. For example, a text editor automatically checks spelling (taking the language used into account). When working with spreadsheets, you do not need to enter all the days of the week or all the months of the year: it is enough to make one or two entries, and the computer can complete the list. With a microphone and a special program, you can control software by voice. When an email address is typed, the browser tries to guess it and complete it. Searching the global network by keywords also involves elements of AI. When scanning handwritten text, AI systems recognize letters and numbers.
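
Many of these everyday features rest on very simple techniques. Address auto-completion, for instance, can be approximated by prefix matching over previously entered values, as in this Python sketch with invented sample data:

    # Toy address auto-completion: suggest previously used addresses by prefix.
    history = ["anna@example.com", "andrew@example.org", "boris@example.com"]

    def suggest(prefix):
        """Return all remembered addresses that start with the typed prefix."""
        return [addr for addr in history if addr.startswith(prefix)]

    print(suggest("an"))  # ['anna@example.com', 'andrew@example.org']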



AI ideas are used in game theory, for example, to create a computer that plays chess, checkers, reversi and other logical and strategic games.

AI methods are also applied to speech synthesis and to the inverse problem, speech analysis and recognition. In most cases, AI is used to find a method for solving some problem. Mathematics is one of the main areas of application of AI methods; symbolic mathematics (computer algebra) is one of the most striking manifestations of artificial intelligence.

The field of AI includes pattern recognition problems (optical and acoustic). Fingerprint identification and the comparison of human faces are pattern recognition tasks.

Expert systems built on the ideas of AI accumulate the experience, knowledge, skills of specialists (experts) in order to transfer them to any computer user at the right time.

The development of intelligent programs differs significantly from conventional programming and is carried out by building an artificial intelligence system (AIS).

If an ordinary PC program can be represented as:

Program = Algorithm + Data

then AI systems are characterized by the following structure:

AIS = Knowledge + Knowledge-Processing Strategy

The main hallmark of an AIS is that it works with knowledge.

Unlike data, knowledge has the following properties:

Internal interpretability: along with the information itself, the knowledge base contains information structures that allow knowledge not only to be stored but also to be used.

Structuredness: complex objects are decomposed into simpler ones, and relationships are established between them.

Connectedness: the knowledge reflects regularities concerning facts, processes and phenomena and the cause-and-effect relationships between them.

Activity: knowledge implies the purposeful use of information and the ability to manage information processes in order to solve particular problems.

Taken together, these properties should give the AIS the ability to model human reasoning when solving applied problems; the concept of a procedure for obtaining solutions (a knowledge-processing strategy) is closely related to knowledge.

In knowledge-processing systems, such a procedure is called an inference mechanism, logical inference, or an inference engine. The principles of constructing an inference mechanism in an AIS are determined by the way knowledge is represented and the type of reasoning being modeled.

To interact with an AIS, the user needs means of communication with it, that is, an interface. The interface provides access to the knowledge base and the inference mechanism in a language of a sufficiently high level, close to the professional language of specialists in the application area to which the AIS belongs.

In addition, the interface supports the user's dialogue with the system, allowing the user to receive explanations of the system's actions, take part in the search for a solution, and replenish and correct the knowledge base.

The main parts of knowledge-based systems are:

1. Knowledge base

2. Inference mechanism

3. Interface with the user.

Each of these parts can be organized differently in different systems; the differences may be in details or in principles. However, all AIS are characterized by the simulation of human reasoning.

The knowledge that a person relies on when solving a particular problem is very heterogeneous:

Conceptual knowledge (a set of concepts and their relationships)

Constructive knowledge (knowledge about the structure and interaction of parts of various objects)

Procedural knowledge (methods, algorithms and programs for solving various problems).

Factual knowledge (quantitative and qualitative characteristics of objects, phenomena and their elements).

A feature of knowledge representation systems is that they model human activity, which is often carried out informally. Knowledge representation models deal with information obtained from experts, which is often qualitative and contradictory. For computer processing, such information must be reduced to an unambiguous, formalized form. The study of methods for the formalized representation of knowledge is the domain of logic.

Currently, AI research has the following applied orientations:

Expert systems

Automatic theorem proving

Robotics

Pattern recognition, etc.

The greatest progress has been achieved in the creation of expert systems (ES), which are widely used for solving practical problems.

  2. AIS tools

The tools used to develop AIS can be divided into several types:

Programming systems in high-level languages;

Programming systems in knowledge representation languages;

Shells of artificial intelligence systems - skeletal systems;

Means of automated creation of ES.

Programming systems in high-level languages are the least oriented towards solving AI problems: they contain no tools designed for representing and processing knowledge. Nevertheless, a fairly large, though gradually shrinking, share of AIS is still developed with traditional high-level languages.

Programming systems in knowledge representation languages have special tools designed for creating AIS: they contain their own means of representing knowledge (according to a particular model) and of supporting inference. The development of AIS with such systems is based on conventional programming technology. The most widely used logic programming language is PROLOG.

Means of automated creation of ES are flexible software systems that allow several knowledge representation models, inference methods and interface types to be used, and they contain aids for building an ES. Building an ES with such tools consists of formalizing the initial knowledge, writing it in the input knowledge representation language, and describing the rules for inferring decisions. The expert system is then filled with knowledge.

Shells, or empty ES, are ready-made expert systems without a knowledge base. Examples of widely used ES shells are the foreign EMYCIN shell and the domestic Expert-micro, oriented towards ES for diagnostic problems. The technology of creating and using an ES shell is to remove the knowledge from the knowledge base of a finished expert system and then fill the base with knowledge oriented towards other applications. The advantage of shells is ease of use: a specialist only needs to fill the shell with knowledge, without writing programs. The drawback is the possible mismatch between a specific shell and the applied ES developed with it.