Author: Farhad Hossain
Published: June 2018
Source: Computer Jagat (English section)
Copyright: Computer Jagat
The Emergence of Artificial Intelligence

Artificial Intelligence (AI) has various definitions, but in general it means a program that uses data to build a model of some aspect of the world. This model is then used to make informed decisions and predictions about future events. The technology is used widely to provide speech and face recognition, language translation, and personal recommendations on music, film and shopping sites. It is also facilitating driverless cars, smart personal assistants, and intelligent energy grids. AI has the potential to make organizations more effective and efficient, but the technology raises serious issues of ethics, governance, privacy and law. The terms Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) can be seen everywhere and are often used interchangeably. But what is what? AI, ML and DL work towards the same goal of predicting future outcomes based on past data, but they are not exactly the same.

Artificial Intelligence (AI) as a research field began back in 1956 at a Dartmouth College conference where attendees thought that a machine as intelligent as a human would be achievable within the next generation. It was not long before they realized that computer hardware limitations would stretch that timeline far beyond their initial expectations.

Machine Learning (ML) is the practice of parsing data using algorithms, learning from it, and trying to predict or decide a future outcome for a real-world problem. It currently gets the most attention of all the subsets of AI, as it looks to be the most promising form of AI for businesses. Successful machine learning systems can make predictions about an outcome and learn to recognize patterns on their own. IBM's Deep Blue win over Garry Kasparov in 1997 was achieved using hard-coded rules and depended entirely on its programming; as such, it does not qualify as an ML system.
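To make the idea concrete, here is a minimal sketch in Python of what "learning from past data to predict a future outcome" can look like. The data points, the maintenance-cost scenario and the learning rate are invented purely for illustration; no specific commercial system works exactly this way.

```python
# Minimal sketch of the machine-learning idea described above: fit a model
# to past data, then use it to predict an outcome it has never seen.
# The data and hyper-parameters below are invented for illustration only.

# Past observations: (hours of machine use, maintenance cost)
data = [(1.0, 3.1), (2.0, 5.0), (3.0, 7.2), (4.0, 8.9), (5.0, 11.1)]

w, b = 0.0, 0.0            # model parameters, learned from the data
learning_rate = 0.01

for _ in range(5000):      # repeatedly adjust w and b to reduce prediction error
    for x, y in data:
        error = (w * x + b) - y
        w -= learning_rate * error * x
        b -= learning_rate * error

# The learned model can now predict a future outcome.
print(f"Predicted cost at 6 hours: {w * 6.0 + b:.2f}")
```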

Deep Learning (DL) is a newer term in AI. Deep learning is a branch of machine learning that models high-level abstractions in data using a deep graph with many processing layers. One example is the AlphaGo project from Google's DeepMind division. It uses a tree search algorithm to find the best possible moves at any given time, and it judges whether a move is good or bad based on the millions of hours of play on which it has been trained.
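As an illustration of the "many processing layers" idea, the sketch below stacks two layers of weights with a non-linearity in between and trains them on a toy problem. It is only a minimal example of a layered network, not AlphaGo's actual architecture, which combines much larger deep networks with tree search.

```python
import numpy as np

# Minimal sketch of a layered ("deep") network: data flows through
# successive processing layers, and training adjusts every layer.
rng = np.random.default_rng(0)

# Toy training data: the XOR function, which a single layer cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # first processing layer
W2 = rng.normal(size=(4, 1))   # second processing layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # Forward pass: data passes through the layers in turn.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: propagate the prediction error to adjust each layer.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

# Outputs should move toward [0, 1, 1, 0] for the four inputs.
print(sigmoid(sigmoid(X @ W1) @ W2).round(2))
```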

The emergence of Artificial Intelligence (AI) has led to applications which are now having a profound impact on our lives. This is a technology that is barely 60 years old. Indeed, the term AI was first coined at the Dartmouth conference in 1956, as stated above. This was a time when the first digital computers were beginning to appear in university laboratories. The participants at this conference were predominantly mathematicians and computer scientists, many of whom were interested in theorem proving and algorithms that could be tested on these machines. There was much optimism at this conference, for they had been given some encouragement by early successes in this field. This led to euphoric predictions about AI that were overhyped.

Since then, progress has sometimes been erratic and unpredictable because AI is a multidisciplinary field that is not underpinned by any strong theories. AI software paradigms and techniques have emerged from theories in cognitive science, psychology, logic, and so on, but they had not matured sufficiently – partly because of the experimental foundations upon which they were based, and partly because of inadequately powerful hardware. AI programs require more powerful hardware, in both speed of operation and memory, than conventional software. Moreover, the emergence of other technologies – such as the Internet – has affected the evolution of AI systems. Thirty years ago, it was assumed that AI systems would become stand-alone systems, such as robots or expert systems, but most of today's AI applications combine technologies.

The focus of AI research has shifted over the last 60 years. The first phase was triggered by the Dartmouth Conference and focused on techniques involving General Problem Solving (GPS). This approach assumed that any problem that could be written in program code – be it mathematical theorem proving, chess playing, or finding the shortest distance from one city to another – could be solved. Such problems would normally involve representing the knowledge in a computer-readable format and then searching through possible states until a solution is found. For example, in chess playing there would be a symbolic representation of the board, the pieces, possible moves, and best moves based on heuristics from previous tournaments, and so on. During a game, the search would find the best move. However, despite showing good promise initially, the GPS approach ran out of steam fairly quickly. The main reason is that the number of search combinations increases exponentially as problems grow in size. Thus, the second phase of research looked at ways to facilitate searching – to reduce or prune the search space – and at ways of representing knowledge in AI. There were some AI research successes during this period, most notably SHRDLU and Shakey the robot.
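The following sketch illustrates the state-space search idea in its simplest form: the problem (here, travelling between cities) is represented as states and transitions, and the program searches through the states until it reaches the goal. The city graph is invented for illustration; real problems blow up combinatorially in exactly the way described above.

```python
from collections import deque

# Minimal sketch of GPS-style state-space search over an invented road map.
roads = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def shortest_route(start, goal):
    """Breadth-first search: explore states level by level, so the first
    route found uses the fewest hops. The number of states explored grows
    rapidly with problem size, which is the combinatorial explosion that
    stalled the GPS approach."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in roads[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(shortest_route("A", "E"))  # e.g. ['A', 'B', 'D', 'E']
```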

However, AI was about to take a step backwards when the Lighthill report, published in the UK in 1973, was very negative about the practical benefits of AI. There were similar misgivings about AI in the US and the rest of the world. However, recognizing the possible benefits of AI, the Japanese gave it a new lease of life in 1982 with the announcement of a massive project called the Fifth Generation Computer Systems (FGCS) project. This project was very extensive, covering both hardware and software, and included intelligent software environments and fifth-generation parallel processing, amongst other things. It served as a catalyst for interest in AI in the rest of the world. In the USA, Europe and the UK there was a move towards building Intelligent Knowledge Based Systems (IKBS) – such systems were also called expert systems. The catalyst for this activity in the UK was the ALVEY project, a large collaborative project funded by the UK government together with industry and commerce that looked at the viability of using IKBS in more than 200 demonstrator systems.
This paved the way for a third phase of AI research concentrating on IKBS which, unlike the GPS universal knowledge approach, relied upon specific domain-based knowledge to solve AI problems. With IKBS, a problem such as medically diagnosing an infectious disease could be solved by incorporating into the system the domain knowledge for that problem. Such knowledge could be acquired from human experts in the domain or by some other means, and it would often be written in the form of rules. The collection of rules and facts making up this knowledge was called a knowledge base, and a software inference engine would then use that knowledge to draw conclusions. IKBS made quite an impact at the time, and many of these systems – R1, MYCIN, Prospector, and many more – were, and in some cases still are, used commercially.
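A toy example can make the knowledge-base and inference-engine idea concrete. The rules and facts below are invented for illustration and are not MYCIN's or R1's actual knowledge; a real expert system would hold hundreds of such rules elicited from human experts.

```python
# Minimal sketch of a rule-based expert system: a knowledge base of rules
# and facts, plus a forward-chaining inference engine that draws conclusions.

# A rule: if all conditions are known facts, conclude the consequent.
rules = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "suspected_pneumonia"),
]

facts = {"fever", "cough", "chest_pain"}   # observations about one patient

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied,
    adding its conclusion to the set of known facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# -> includes 'respiratory_infection' and 'suspected_pneumonia'
```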

However, there were some shortcomings with IKBS: their inability to learn and, in some cases, the perceived narrowness of their focus. The ability to learn is important because IKBS need regular updating, and doing this manually is time consuming. Machine learning techniques have now matured enough to enable systems to learn with little or no human intervention. IKBS had a narrow focus because they did not have the "common sense" knowledge possessed by a human expert to draw upon. This meant that many of these systems were very competent at solving problems within the narrow confines of their domain knowledge, but failed when confronted with an unusual problem that required them to use common-sense knowledge. AI experts realized that expert systems lacked the common sense that we humans acquire from the day we are born. This was a severe impediment to the success of AI because such systems were seen as brittle.

For this reason, a number of projects have been developed with a view to resolving this problem. The first was CYC, a very ambitious AI project that attempted to represent common-sense knowledge explicitly by assembling an ontology of familiar common-sense concepts; the purpose of CYC was to enable AI applications to perform human-like common-sense reasoning. However, shortcomings were identified with the CYC project – not least in dealing with the ambiguities of human language. Douglas Lenat began the project in July 1984 at MCC (the Microelectronics and Computer Technology Corporation, the first, and at one time one of the largest, computer industry research and development consortia in the United States; MCC ceased operations in 2004), where he was Principal Scientist from 1984 to 1994. Since January 1995 the project has been under active development by the Cycorp company, where Lenat was CEO until early 2017. Parts of the project were released as OpenCyc, which provided an API, RDF endpoint, and data dump under an open-source license. Other, more recent approaches have drawn upon the "big data" approach, sometimes using an open-source model for data capture on the Web. For example, ConceptNet captures common-sense knowledge – the many things computers should know about the world – by enabling users to input knowledge in a suitable form.
During the last few decades, machine learning has become a very important research topic in AI. It is mostly implemented using techniques such as neural networks and genetic algorithms. This represents the fourth phase of AI research.
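As a small illustration of the second technique mentioned, the sketch below evolves a population of candidate solutions through selection, crossover and mutation. The toy objective (a bit-string of all 1s) and all parameters are invented for illustration.

```python
import random

# Minimal sketch of a genetic algorithm: evolve candidate solutions by
# selection, crossover and mutation until a fit solution emerges.
random.seed(1)
GENOME_LEN, POP_SIZE = 20, 30

def fitness(genome):
    return sum(genome)                       # toy goal: more 1s is better

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break
    parents = population[:POP_SIZE // 2]     # selection: keep the fittest half
    children = []
    while len(children) < POP_SIZE - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, GENOME_LEN)
        child = a[:cut] + b[cut:]            # crossover of two parents
        if random.random() < 0.2:            # occasional random mutation
            i = random.randrange(GENOME_LEN)
            child[i] = 1 - child[i]
        children.append(child)
    population = parents + children

print("generations:", generation, "best fitness:", fitness(population[0]))
```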

In the short term – the next 5 to 15 years – AI and robotics are likely to transform the workplace, making huge numbers of human jobs redundant. Robots do not get paid, do not get tired, and do not demand better working conditions. This means that millions of robots are likely to take the place of factory workers in the future. For example, Foxconn, a company that assembles Apple iPhone parts, is replacing 60,000 workers with robots. These are very different from the dumb robots that have been used in car plants to perform repetitive single-task activities; they are more mobile, more flexible, and capable of more general multi-tasking.
In the medium term, we will have to get used to machines playing a much greater part in our lives, sharing the roads with driverless cars until the day comes when human drivers are an extinct species. There will also be robots, seemingly ubiquitous, performing all sorts of general tasks reliably. Human-AI relationships will develop as simulated personalities become more convincing and intelligent devices communicate with us in natural language, much as we converse with other humans. There will, inevitably, be many other examples of advanced AI that become commonplace, because machine intelligence algorithms will find uses in many applications. These algorithms are likely to be everywhere, dominating our lives.
In the longer term, super-intelligent AI (that is, intelligence above human level) is, according to some experts, probably at least 30 years away. However, when it arrives, it will give us the capability to solve problems beyond our own intelligence limits and could well provide answers to problems beyond us – such as discovering technically efficient ways of providing energy, solving other resource problems such as water availability, and so on.

There are many other possible benefits likely to unfold in an age of machine super-intelligence. AI systems that can rapidly acquire large amounts of specialized knowledge will be well suited to medical and educational applications. Kurzweil lists several of these in his future predictions. For example, we have already entered a cyborg (human augmentation) age in which silicon enhances our own biological limits. Prosthetic limbs – hands, arms and legs – are now in widespread use, providing strength and dexterity approaching that of natural limbs; some recipients say that they even feel sensation in these limbs. But this is only the start. Many people will want to extend the limits of their biological bodies with silicon-based intelligence that can improve them physically and/or mentally.

Another likely consequence of the age of AI is "mind or brain uploading" – that is, copying a mind to a computer. This could take the form of scanning the brain and creating a copy of that person's mind. This is known as "digital immortality", and many billionaire technology gurus, such as Elon Musk, are investigating ways of doing this now. The cost of mind uploading will be high because the human brain contains over 100 billion neurons interconnected in thousands of ways. Of course, it is unlikely that human consciousness could ever be fully replicated by uploading from biological to electronic format, because we change constantly throughout our lives as a result of daily experiences. But some essential human characteristics, such as the sound of a person's voice, their beliefs and values, even their sense of humor, could well be captured once more progress is made in this field and the computing power is available. Whatever the case, it seems certain that we will encounter huge changes in the next few decades.