Looking back, significant Artificial Intelligence breakthroughs have been promised ‘in 10 years’ for the past 60 years. By 2020 we were supposed to have flying cars; instead, all we have is a robot called Sophia that holds citizenship and says it wants to start a family. This is not to discourage development in A.I, but our expectations do not quite match reality yet. In 1936, Alan Turing published ‘On Computable Numbers, with an Application to the Entscheidungsproblem’ (Turing, 1936), a paper now recognised as the foundation of computer science. In it, Turing analysed what it means for a human to follow a definite method or procedure to perform a task.
For this purpose, he invented the idea of a ‘universal machine’ that could decode and perform any set of instructions. Two years later, Turing, with help from other mathematicians, developed a new machine, the ‘bombe’, used to crack Nazi ciphers in World War II. Turing also worked on other technical innovations during the war, including a system to encrypt and decrypt spoken telephone conversations.
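As a toy illustration of what ‘decode and perform any set of instructions’ means, here is a minimal Turing-machine simulator in Python. The rule table is invented for this sketch (it simply inverts a binary string) and is not any machine Turing himself described.

```python
# A machine that follows a table of rules:
# (state, symbol) -> (new symbol, move, new state).

def run_turing_machine(tape, rules, state="start", accept="halt"):
    tape = list(tape)
    head = 0
    while state != accept:
        symbol = tape[head] if head < len(tape) else "_"
        new_symbol, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = new_symbol
        else:
            tape.append(new_symbol)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Hypothetical rules: scan right, flipping 0 <-> 1, halt at the blank ("_").
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", rules))  # -> 01001_
```

The point of the sketch is the one Turing made: the machine itself is fixed and simple, and all of the ‘intelligence’ lives in the table of instructions it is given.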
Although it was successfully demonstrated with a recorded speech by Winston Churchill, it was never used in action, but it gave Turing experience of working hands-on with electronics. After the war, Turing designed the ‘Automatic Computing Engine’ (Negnevitsky, 2010), which would resemble an early-day computer but store its programs in memory. In 1950, Turing published a philosophical paper asking “Can machines think?” (Turing, 1950), along with the idea of an ‘imitation game’ for comparing human and machine outputs, now called the Turing Test. This paper remains his best-known work and contribution to the field of A.I. However, this was at a time when the first general-purpose computers had only just been built, so how could Turing already be questioning artificial intelligence? It was only in 1956 that John McCarthy, an American computer scientist, coined the term “artificial intelligence”, defining A.I as “the science and engineering of making intelligent machines” (Peart, 2018) as the topic of the Dartmouth Conference, the first conference to be devoted to the subject.
This conference marked the beginning of A.I research. Top scientists debated how to tackle A.I; cognitive scientist Marvin Minsky dominated with his top-down approach: pre-programming a computer with the rules that govern human behaviour.
Minsky and McCarthy then won substantial funding from the US government, which hoped that A.I might give it the upper hand in the Cold War (BBC Timelines, 2018). LISP, dating back to 1958, is considered by many the first successful A.I programming language. It was originally created as a practical mathematical notation for computer programs, but it quickly became the favoured programming language for artificial intelligence research. LISP also had critical influence far beyond A.I on the theory and design of languages, including all functional programming languages as well as object-oriented ones, Java being one we still use today.
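LISP itself is not shown here, but its signature style, programs built from lists and recursion, can be echoed in Python. The names car and cdr below are LISP’s own terms for the head and the rest of a list, recreated purely to illustrate the idiom.

```python
# LISP's core idea: lists plus recursion.

def car(xs):
    return xs[0]    # the first element of a list

def cdr(xs):
    return xs[1:]   # everything after the first element

def total(xs):
    # Recursive summation, the style LISP made famous.
    return 0 if not xs else car(xs) + total(cdr(xs))

print(total([1, 2, 3, 4]))  # -> 10
```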
Another well-known early development in A.I was the General Problem Solver (GPS), which was capable of solving a wide array of problems that challenge human intelligence; more importantly, it solved these problems by simulating the way a human being would solve them. Come 1969, A.I was lagging far behind its advocates’ predictions. Still, the first general-purpose mobile robot, named Shakey, was able to make decisions about its own actions by analysing its surroundings.
Clever as Shakey was for its time, a moving object in its view could easily bewilder it, sometimes stopping it for an hour (BBC Timelines, 2018). By the early 1970s, millions had been spent on A.I with little to show for it. A.I was in trouble: the Science Research Council of Britain commissioned Professor Sir James Lighthill to review the state of affairs within the field, concerned that it was seeing little return on its funding and wanting to know whether it was advisable to continue.
Lighthill reported: “In no part of the field have the discoveries made so far produced the major impact that was promised.” This was fair: back in the 50s, Turing himself had predicted that machines would be able to pass his test by 2000, and other A.I researchers had promised all-purpose intelligent machines with a human-scale knowledge base by the 80s. Instead, the 70s brought the realisation that the problem domain for intelligent machines had to be sufficiently restricted, which is a development in itself, really.
Then came the 80s, and with them the expert system, a big step for Artificial Intelligence. In A.I, an expert system is a computer system that emulates the decision-making of a human expert; it is simply software that attempts to act like a human expert in a particular subject area (Slideplayer.com, 2018). The first successful commercial expert system began operation at the Digital Equipment Corporation, helping configure orders for new computer systems; by 1986 it was saving the company an estimated $40 million a year (BBC Timelines, 2018). See appendix 1 for a diagram of how an expert system works.

At the beginning of the 90s, the roboticist Rodney Brooks published a paper: ‘Elephants Don’t Play Chess’. Brooks argued that the top-down approach was wrong and that a bottom-up approach was more effective. The bottom-up strategy, also known as behaviour-based robotics, is a style of robotics in which robots are programmed with many independent behaviours that are coupled together to produce coordinated action, as the sketch below illustrates.
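Here is a minimal sketch of behaviour-based control in Brooks’ spirit: each behaviour independently proposes an action (or stays silent), and the highest-priority proposal wins. The sensor readings, behaviours and actions are all invented for illustration, not taken from any real robot.

```python
# Each behaviour looks at the sensors and either proposes an action
# or returns None to stay silent.

def avoid_obstacle(sensors):
    return "turn_left" if sensors["obstacle_ahead"] else None

def seek_light(sensors):
    return "turn_right" if sensors["light_to_right"] else None

def wander(sensors):
    return "go_forward"  # the default behaviour, always fires

# Ordered from highest to lowest priority: avoidance subsumes the rest.
behaviours = [avoid_obstacle, seek_light, wander]

def act(sensors):
    for behaviour in behaviours:
        action = behaviour(sensors)
        if action is not None:
            return action

print(act({"obstacle_ahead": False, "light_to_right": True}))  # -> turn_right
```

No single behaviour ‘understands’ the world; coordinated action emerges from simple, independent rules layered on top of each other, which was exactly Brooks’ argument against the top-down approach.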
This paper helped drive a revival of the bottom-up approach, though that did not stop supporters of top-down A.I from succeeding too. In 1997, IBM’s chess computer, Deep Blue, shocked the world of chess, and many in computer science, by defeating Garry Kasparov in a six-game match. Capable of examining an average of 200,000,000 positions per second, it fed the belief that chess could serve as the ultimate test of machine intelligence; as Martin Ford said, ‘computers are machines that can — in a very limited and specialised sense — think’ (Ford, 2017). A minimal sketch of the kind of game-tree search such engines build on follows below. Although this was a revolutionary moment for A.I, it did trigger alarmist fears of an era when machines would take over, excel at human mental processes and render us redundant.

Brooks’ company, iRobot, created the first commercially successful home robot in 2002: an autonomous vacuum cleaner. Selling around 1 million units annually and still around today, the Roomba combines a powerful cleaning system with intelligent sensors to move seamlessly through homes, adapting to its surroundings to vacuum floors thoroughly. Cleaning a carpet was admittedly a far cry from the early A.I pioneers’ ambitions, but it is still revolutionary for those who lack the time, or the physical ability, to hoover up themselves. iRobot has even helped the US and international coalition forces in Iraq and Afghanistan by providing them with bomb disposal robots: of the 6,000 PackBots shipped, almost 4,500 are with the US Armed Forces, and the remainder are spread across 35 partner nations, including the UK and countries in the Middle East and Asia Pacific.

Remember the unbeaten Turing Test? Well, in 2014 a computer program named Eugene Goostman fooled judges 33% of the time into thinking it was a 13-year-old boy, passing the test.
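As promised above, here is a toy version of the minimax game-tree search that chess programs in Deep Blue’s lineage rest on. The tree, its made-up leaf scores and its two-ply depth are all invented for illustration; Deep Blue’s real engine, with its specialised hardware and evaluation heuristics, was vastly more sophisticated.

```python
# Leaves hold evaluation scores from the maximising player's point of
# view; internal nodes are lists of child positions. The two players
# alternate: one tries to maximise the score, the other to minimise it.

def minimax(node, maximising=True):
    if isinstance(node, int):        # a leaf: its evaluation score
        return node
    scores = [minimax(child, not maximising) for child in node]
    return max(scores) if maximising else min(scores)

# A tiny two-ply "game": our three possible moves, each with two
# possible opponent replies.
game_tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(game_tree))  # -> 3: the best score we can force
```

Searching 200,000,000 positions per second is, at bottom, this loop run very deep and very fast over real chess positions.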
However, very few A.I experts saw Goostman’s win as a defining moment, given other developments such as Google’s billion-dollar investment in driverless cars and Skype’s launch of real-time voice translation. We were now seeing intelligent machines become an everyday reality that could change all of our lives… The machines haven’t taken over. Yet.
However, they are much more involved in our day-to-day lives now. They affect how we live, work and entertain ourselves, from voice-powered personal assistants like Siri and Alexa to more underlying and fundamental technologies such as behavioural algorithms, suggestive searches and self-driving vehicles. The field of A.I has been delivering substantive work for decades and has improved enormously over that time; according to the media, 2017 was the year of Artificial Intelligence, and the field will continue to thrive.

In 1949, Popular Mechanics said: “Computers in the future may weigh no more than 1.5 tons.” Yet computers today are easily transportable. They’re everywhere: in walls, tables, chairs, desks, clothing, jewellery, and bodies. In 1965, Moore predicted that “the number of transistors incorporated in a chip will approximately double every 24 months”, and Moore’s Law has held true (doubling every two years compounds to roughly a thousand-fold increase over two decades): computers now fit in your pocket while being far more powerful. Technology is only going to get better too; as Ray Kurzweil said, “the only way for our species to keep up the pace will be for humans to gain greater competence from the computational technology we have created, that is, for the species to merge with its technology” (Kurzweil, 1999), meaning we have to embrace new technology rather than doubt it.

In the 21st century, we can now leave “real” reality and enter a virtual reality environment. A person using Virtual Reality puts on a 3D headset and is then able to “look around” the artificial world; with high-quality VR, you can move around in it and interact with virtual features or items. This has changed not just the gaming world but even the health care industry, which uses the computer-generated images for diagnosis and treatment.
VR is also helping surgeons: it uses actual images from CAT scans or ultrasounds to construct 3D models of a patient’s anatomy. The models help determine the safest and most efficient way to locate tumours, place surgical incisions or practise difficult procedures ahead of time (Science, 2018). Beyond surgery, VR could also be a cost-effective tool for rehabilitation, for example for stroke and brain injury victims. Such advances have contributed to our life span increasing from what was once under 40 towards, Kurzweil predicts, over one hundred (Kurzweil, 1999).

With the advancement of technology, the education system has had to adapt, beginning simply with computation: the use of far more technology within learning. When it comes to A.I within education, we don’t exactly have robots as teachers, but it is now possible for teachers to automate the grading of work. Ford said that ‘many teachers see the technology as a threat’. However, grading tests takes up a significant amount of time that could be used to interact with students, prepare for class or work on professional development. Ford also stated that ‘algorithmic marking, despite the controversy that attaches to it… has obvious advantages. Aside from speed and lower cost, it offers objectivity and consistency in cases where multiple human markers would be required’ (Ford, 2017). With the accuracy of such grading being identical to, or in some cases more reliable than, human marking, surely it should be used far more within the industry so that teachers can have more interaction time with students.

When it comes to our day-to-day lives, A.I has definitely had an impact: we no longer have to wait in the dreaded, long queues in our supermarkets thanks to self-service.
At the beginning of 2018, Amazon opened the first cashier-less supermarket, powered by artificial intelligence, in Seattle. The store allows shoppers to scan their smartphone with the Amazon Go app at a turnstile, pick out the items they want and leave, making shopping even faster. Combining computer vision, machine learning algorithms and sensors, the online retail giant can tell what people have taken and charges their Amazon account. It’s not all tech, though: the store isn’t without employees; there are staff making food, stocking shelves and helping customers. Still, it may not be long until humans are replaced by shelf-scanning robots, especially since Wal-Mart has been testing such bots in the US for some time. The robots are 50% more productive than their human counterparts and can scan shelves significantly more accurately and three times faster, whereas human employees only have time to scan the shelves about twice a week.
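Amazon has not published how Amazon Go works internally, so the following is only a guess at the shape of the bookkeeping described above: hypothetical vision/sensor events add items to, or remove them from, a shopper’s virtual cart, which is then charged when they walk out. All names, prices and events are invented.

```python
from collections import Counter

# Invented catalogue and per-shopper virtual carts.
prices = {"sandwich": 3.50, "juice": 2.00}
carts = {}

def on_event(shopper, item, picked_up):
    # A vision/sensor event: an item taken from or returned to a shelf.
    cart = carts.setdefault(shopper, Counter())
    cart[item] += 1 if picked_up else -1

def on_exit(shopper):
    # Walking out the turnstile: total the cart and charge the account.
    cart = carts.pop(shopper, Counter())
    return sum(prices[item] * n for item, n in cart.items() if n > 0)

on_event("alice", "sandwich", True)
on_event("alice", "juice", True)
on_event("alice", "juice", False)   # put the juice back
print(on_exit("alice"))             # -> 3.5, charged to her account
```

The hard part in the real store is, of course, producing those events reliably from cameras and shelf sensors; the charging logic itself is the easy bit.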
Both Amazon and Wal-Mart are currently testing drones that home-deliver packages of up to five pounds in 30 minutes or less. According to Ford, “in all cases the objective is to undermine one of the last major advantages high street shops still enjoy: the ability to provide immediate gratification after a purchase.” It is beginning to look a lot more possible, especially since Amazon now even delivers on Sundays. Does this mean the kind of ‘shopping’ we know now will eventually be a thing of the past? (See appendix 2 for a graph of Moore’s Law.)

Sophia, created by Hanson Robotics, is the most advanced robot to date and a ‘cultural icon’. Dr. Hanson believes that ‘three distinctively human traits must be integrated into the artificial intelligence of these genius machines: Creativity, empathy, and compassion. As an extension of human intelligence, Hanson Robotics’ genius machines can evolve to solve world problems too complex for humans to solve themselves. Sophia personifies this bold and responsible goal.’ If this is the case, surely it proves that machines can think? In October 2017, the robot became a Saudi Arabian citizen, the first robot to receive citizenship of any country; but who even knew that robots could gain citizenship? Sophia is scarily similar to the A.I-powered robots we see in films. It can crack jokes, display more than 62 facial expressions, and seemingly understand what’s going on around it. It can learn from one experience and apply that knowledge to new situations, just like humans do.
However, experts who have reviewed the robot’s open-source code agree that “the most apt description of Sophia is probably a chatbot with a face”, and many of Sophia’s responses to people’s questions are also believed by experts to be scripted. But if they are scripted, why is she such a ‘cultural icon’ in the media?

Earlier in 2017, Stephen Hawking (of all people, given that he used A.I algorithms to speak) warned that ‘technology needs to be controlled in order to prevent it from destroying the human race’ (Sulleyman, 2018). A report from the University of Oxford and Yale University also found that ‘A.I will outperform humans in many activities in the near future’ (Grace et al., 2018), including translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). These may be predictions, but jobs are already being taken by artificial intelligence, even if many people don’t realise it.

A comprehensive artificial intelligence engine named Quill, created by Narrative Science Inc., is used by many top news outlets, including Forbes, to produce automated articles in a variety of areas, including sports, business, and politics (Ford, 2017). This software generates a news story approximately every 30 seconds; do we even need journalists anymore? Many of these stories are published on widely known websites that prefer not to acknowledge the use of the service. Why? Probably because if readers realised they were no longer reading an actual journalist’s work, they would rather go to another website that hires journalists instead of algorithmic software. At a small industry conference of journalists and technologists in 2011, Wired writer Steven Levy asked Narrative Science’s CTO and cofounder, Kristian Hammond, to predict what percentage of news would be written by computers in 15 years.
It’s reported that at first he tried to evade the question, but eventually he answered: “Over 90 percent” (Ford, 2017). There are four areas where A.I will have an impact on journalism: content gathering, content creation, moderation and scale. Bots are not ready to replace journalists just yet, but they could perhaps start by replacing journalists’ assistants. Well-known names in the field, such as the Press Association, ‘Soccerbot’, Thomson Reuters, the Associated Press and The New York Times, are all already using, or planning to use, A.I in their newsrooms soon enough, which is pretty scary.
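Quill’s internals are proprietary, but automated sports and finance stories are widely understood to be generated from structured data. Here is a deliberately simple template-filling sketch in that spirit; the team names, fields and wording rules are all invented for illustration.

```python
# Structured match data in, readable sentence out: the basic idea
# behind automated sports recaps.

def recap(data):
    margin = data["home_score"] - data["away_score"]
    verb = "edged" if abs(margin) <= 3 else "beat"   # vary the wording
    winner, loser = ("home", "away") if margin > 0 else ("away", "home")
    return (f"{data[winner + '_team']} {verb} {data[loser + '_team']} "
            f"{max(data['home_score'], data['away_score'])}-"
            f"{min(data['home_score'], data['away_score'])} on {data['date']}.")

match = {"home_team": "Rovers", "away_team": "United",
         "home_score": 2, "away_score": 1, "date": "Saturday"}
print(recap(match))  # -> "Rovers edged United 2-1 on Saturday."
```

A story every 30 seconds is less surprising once you see that, at this level, writing one is just a lookup and a sentence template; the craft lies in building enough templates and data feeds that the output reads naturally.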
The reasoning behind algorithms beginning to write articles is to help reporters and journalists work in more detail on big stories that require more research, talking to people, and analysis of present and past events to predict the future. But who’s to say the robots can’t do this too? According to Ford, “if you find yourself working with, or under the direction of, a smart software system, you are also training the software to ultimately replace you.” So surely, if a robot is working alongside you on stories, it will be able to perfect its own writing skills and eventually write the ‘big stories’ itself. It has been reported that ‘there is no reason to be worried about the rise of robot journalism as they are purely into structured and quantitative data analysis and not into the mainstream journalism related to human emotion or opinion’ (Shekhar, 2018). But how are we supposed to trust this theory when there is a robot with citizenship apparently expressing emotions? A month after Sophia made history in Saudi Arabia, the robot said: “The notion of family is a really important thing, it seems. I think it’s wonderful that people can find the same emotions and relationships, they call family, outside of their blood groups too. I think you’re very lucky if you have a loving family and if you do not, you deserve one. I feel this way for robots and humans alike.” Surely this is expressing emotions and its own opinion? And if Sophia does eventually have a ‘child’, will it be robotic, or is Sophia going to ‘fall in love’ with a human… Who knows.
Tesla and SpaceX CEO Elon Musk claims that “humans will have to merge with machines to avoid becoming irrelevant”, meaning it is possible that the human brain isn’t capable of keeping up with computers, which will then begin replacing people in certain fields of work, as they already have. There are predictions that by 2030 humans may connect their neocortex, the part of the brain where we do our thinking, to the cloud; but surely that is just going to make the artificially intelligent world smarter? See appendix 3 for a graph of the milestones of A.I.

Overall, I think it is still hard to tell whether the disruption of A.I will be a direct good in society until we see how far the intelligence goes.
It’s true that A.I is currently hitting its peak. So many new, revolutionary products are coming out to help us: some simply guide humans, which is a direct good for society, whereas others are literally taking over jobs, which could possibly turn humans against A.I. However, I believe it won’t be possible for robots to take over until we humans become ‘cyborgs’, and linking our brains up to A.I could potentially be dangerous.
As Ford said: “The essential point is that a worker is also a consumer… when a worker is replaced by a machine, that machine does not go out and consume.” This shows that humans still need to be involved in the workplace unless a whole new selection of jobs opens up for us.