Technical Specifics of AI in the Real World - Coursework Example

Summary
"Technical Specifics of AI in the Real World" paper focuses on Artificial Intelligence which seeks to establish the perceptive and cognitive nature of so-called intelligent systems. It seeks to reveal the actions that intelligent systems take based on their understanding of their environment. …


A Comparison of the USA NSA PRISM Surveillance Program with Its Fictional Representation in the CBS TV Show Person of Interest

Artificial Intelligence (AI) is a science of intelligence. It is the intelligence that machines or software exhibit. As an academic field of study, it seeks to establish the perceptive and cognitive nature of so-called intelligent systems. It also seeks to reveal the actions that intelligent systems take based on their understanding of their environment. This understanding is usually based on data collected from the environment. AI has fascinated the human race for a long time, and evidence can be traced back to ancient Egypt. According to Li and Du (2007, p. 1), a man living in Alexandria named Hero devised a means of automatically giving holy water to worshippers: all they had to do was insert coins into a slot, and two huge bronze priests would raise their pots, pouring holy water into a sacrificial fire. If that early piece of automation is counted as intelligent, then AI has made major strides since.

Essay Q to answer: Technical specifics of AI in the real world

In the modern age and in real life, AI has taken various forms and techniques. This is primarily because AI research and development has been centered on specific individuals, research companies and academic institutions. Each particular AI flavor has been geared towards a specific goal or objective, so divergent AI systems have been developed based on the discipline and purpose of application. For instance, Jones (2008, p. 7) highlights the "Dendral Project", created at Stanford University in 1965. It was 'developed to help organic chemists understand the organization of unknown organic molecules. It used as its inputs mass spectrometry graphs and a knowledge base of chemistry, making it the first known expert system'. Also highlighted is Macsyma, developed at MIT by Carl Engelman, William Martin and Joel Moses. This was a computer algebra system written in MacLisp, a dialect of LISP also developed at MIT, and it 'demonstrated solving integration problems with symbolic reasoning'. Present-day commercial math applications base some of their features on this early mathematical system.

AI systems exhibit various intelligence characteristics depending on their domain. One is the perception of data: Honavar and Uhr (1994) point out that this involves the interpretation, manipulation and integration of data from sensors, with the system's internal operations setting the context and purpose of action. Another is communication: an AI is able to relay its interpretation of received data and explain events and actions to other intelligent agents, such as humans, using sound, pictures, symbols, signals, signs and icons. An AI is also autonomous: it can formulate plans, set goals and take action to meet those goals, and it can modify its actions in response to unexpected circumstances. This is a crucial feature, as the system can learn and easily adapt to changing environments and inherent stimuli.
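These characteristics (perception, communication, and autonomous goal-directed action) are often described together as a sense-plan-act loop. The sketch below is only an illustrative toy under invented names (a thermostat-style agent); it is not drawn from any of the systems discussed in this paper.

```python
# Toy illustration of the sense-plan-act loop behind the three characteristics above:
# the agent perceives its environment, plans an action towards its goal, acts, and
# communicates what it did. The names and the thermostat-like domain are invented.

class ThermostatAgent:
    def __init__(self, goal_temperature: float):
        self.goal = goal_temperature

    def perceive(self, sensor_reading: float) -> float:
        # Perception: interpret raw sensor data in the context of the agent's goal.
        return sensor_reading - self.goal

    def plan(self, error: float) -> str:
        # Autonomy: choose an action that moves the environment towards the goal.
        if error > 0.5:
            return "cool"
        if error < -0.5:
            return "heat"
        return "idle"

    def act_and_report(self, sensor_reading: float) -> str:
        # Communication: explain the chosen action to another agent (here, a human).
        action = self.plan(self.perceive(sensor_reading))
        return f"reading={sensor_reading:.1f}, goal={self.goal:.1f}, action={action}"

if __name__ == "__main__":
    agent = ThermostatAgent(goal_temperature=21.0)
    for reading in (18.0, 21.2, 24.5):
        print(agent.act_and_report(reading))
```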
One of the AI systems currently deployed is the USA NSA's PRISM surveillance program. This system was enabled during the term of President Bush by the Protect America Act of 2007 and by the 2008 amendment to the Foreign Intelligence Surveillance Act (FISA). The authority to make use of this system was granted by the Foreign Intelligence Surveillance Court. The FISA Amendments Act of 2008 (FISAA) protects companies that cooperate with government agencies by sharing private intelligence information. It was renewed by Congress in 2012, under President Barack Obama, through 2017. The FISAA gives government intelligence agencies the authority to monitor various facets of private user data, among them phone and email correspondence. According to the act, such communication can be monitored for one year when one of the subjects is a foreigner located outside the USA. The PRISM program does not need a warrant for each of these individuals. Instead, its activities are guided by federal judges who oversee the interpretation of the FISA.

The existence of the PRISM program was leaked in 2013 by an NSA intelligence contractor named Edward Snowden. The revelations he made were published by The Guardian and The Washington Post on June 6th, 2013. According to The Washington Post, the NSA and the FBI are mining data from the servers of nine US Internet companies, namely Microsoft, Google, Yahoo, AOL, YouTube, Apple, PalTalk, Skype and Facebook. According to Gellman and Poitras (2013), the NSA regards the specifics of these partnerships as a very sensitive secret whose exposure would result in the companies' withdrawal from the program. According to the PowerPoint slides and supporting materials revealed by Edward Snowden to The Washington Post, the "Upstream" part of the PRISM program collects data from the fiber-optic cable networks that carry most of the world's internet and phone traffic and pass through the USA. The program has inputs at major chokepoints of these networks, including the undersea cables that connect North America to the rest of the world and their respective landing points.

Gellman and Lindeman (2013) point out that the NSA security analysts who make targeted searches and collect data on a subject work from a web portal at Fort Meade. They enter search terms that must prove with 51 percent confidence that the subject is a foreigner. The search terms may make reference to other people by name, phone number or e-mail address; they may also refer to restricted commodities or organizations. The analyst then fills in an electronic form to demonstrate that the purposes of the search will not capture the data of USA residents. It is this profiling that the analyst's supervisor uses to approve the targeted data collection. The supervisor also checks to ensure that the surveillance 'complies with the NSA regulations and the classified judicial order interpreting Section 702 of the FISAA'. However, anyone the suspect has been in contact with will have their data chained in as well: from the suspect's inbox and outbox, contacts and their data are swept in, an occurrence described as "incidental collection". The incidental content is kept for a period of five years. If a subject turns out to be a U.S. citizen, the data collection is labeled "inadvertent" and is destroyed.

The PowerPoint slides further reveal that, to achieve this extensive data collection, the FBI uses government equipment installed on the property of the Special Source Operations (SSO) partners. The FBI is then able to retrieve information matching the criteria being used.
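The targeting procedure described above (a 51 percent foreignness-confidence threshold, supervisor approval, contact chaining, and time-limited retention of "incidental" data) can be read as a simple decision procedure. The following sketch only restates the publicly reported rules cited in this paper; the TargetRequest structure, field names and functions are invented for illustration and are in no way the NSA's actual software.

```python
# Illustrative sketch of the reported PRISM targeting rules: a request must assert
# at least 51% confidence that the subject is foreign, a supervisor must approve it,
# and data swept in from the subject's contacts ("incidental collection") is retained
# for five years, or destroyed if the person turns out to be a U.S. citizen.
# All structures and names here are invented for illustration.
from dataclasses import dataclass

RETENTION_YEARS = 5           # reported retention period for incidental collection
FOREIGNNESS_THRESHOLD = 0.51  # reported minimum confidence that the subject is foreign

@dataclass
class TargetRequest:
    search_terms: list[str]
    foreignness_confidence: float   # analyst's asserted confidence, 0.0 - 1.0
    supervisor_approved: bool

def may_collect(request: TargetRequest) -> bool:
    """A request proceeds only if it clears the foreignness threshold and is approved."""
    return (request.foreignness_confidence >= FOREIGNNESS_THRESHOLD
            and request.supervisor_approved)

def handle_incidental(contact_is_us_citizen: bool) -> str:
    """Incidental data is destroyed for U.S. citizens, otherwise retained for five years."""
    if contact_is_us_citizen:
        return "destroyed (labeled 'inadvertent')"
    return f"retained for {RETENTION_YEARS} years"

if __name__ == "__main__":
    req = TargetRequest(["example term"], foreignness_confidence=0.6, supervisor_approved=True)
    print(may_collect(req))            # True
    print(handle_incidental(False))    # retained for 5 years
    print(handle_incidental(True))     # destroyed (labeled 'inadvertent')
```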
The data is then 'processed and analyzed by specialized systems that handle voice, text, video and "digital network information" that includes the locations and unique device signatures of targets'. The information intercepted at the premises of partnering private companies is then shared with the NSA, FBI or CIA. Notably, depending on the partner service provider, there can be live monitoring and notification of a suspect's activities: when they log in to a monitored web service, send an e-mail or text message, or engage in a voice call or chat, their correspondence is captured. Stored communications are checked against the FBI database to ensure that the search terms do not correspond to U.S. citizens. A similar review is conducted by the NSA Office of Standards and Compliance.

Essay Q to answer: Technical specifics of the comparable science-fiction AI

It is against the backdrop of such extensive mass surveillance using AI that a number of TV and movie productions have been made. They try to depict surveillance scenarios that could very well happen in the life of a typical citizen. A portion of their intrigue, and of the huge following and viewership they get, is also attributed to their rendition of the future possibilities of AI, in this case surveillance. Person of Interest is one such TV show.

Person of Interest is a crime drama series produced by Jonathan Nolan for CBS TV. It was first aired on September 22, 2011. It is set after September 11, 2001, at a time when people are hyper-conscious of the threat of terrorism. It tells of the existence of a secret government supercomputer (an AI) that is always watching every person's activities, on the lookout for terrorists. The computer system was created by Harold Finch and Nathan Ingram on contract for a government project code-named "Northern Lights". In the show, the system is named "The Machine" (Pedia of Interest 2014). The Machine gets its data from feeds of domestic security organizations such as the NSA and foreign agencies like Interpol. It then uses this data to predict the occurrence of terrorist attacks with a very high level of success. It also modifies intelligence reports so that they reflect the "relevant" information concerning matters of national security. This information enables the government and the agencies concerned to pre-empt terrorist activities. The data takes the form of e-mail, voice calls and chats, video footage, text messages and electronic transactions. Once a threat has been identified, the information is forwarded to the FBI or NSA while maintaining the anonymity of the Machine. In order to maintain the integrity of the Machine and protect it from human interference, its location remains a mystery (Pedia of Interest 2014).

Unknown to its creator, the Machine also identifies individuals who are about to commit a crime, or who are under one threat or another, by their social security numbers. These individuals are labeled "irrelevant". The Machine was designed such that it deletes the irrelevant numbers every night at midnight and then re-initializes itself. Such numbers are related to acts that do not threaten national security, such as domestic violence, petty theft, violent crimes and other premeditated crimes. One of the creators of the Machine, Ingram, dies violently, but the incident is labeled irrelevant by the Machine. This awakens its other creator, Finch, to the relevance of the "irrelevant" numbers.
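As described, the Machine's treatment of "relevant" and "irrelevant" numbers amounts to a nightly housekeeping routine: numbers tied to national-security threats are forwarded, and everything else is deleted at midnight before the system re-initializes. A minimal sketch of that described behavior, using invented names, might look as follows.

```python
# Sketch of the Machine's nightly routine as the show describes it: numbers linked
# to national-security threats are forwarded, while all "irrelevant" numbers are
# deleted at midnight before the system re-initializes. Names are invented.

def nightly_cycle(predictions: dict[str, bool]) -> list[str]:
    """predictions maps a social security number to True if the predicted incident
    threatens national security ("relevant"), or False otherwise ("irrelevant")."""
    relevant = [ssn for ssn, is_threat in predictions.items() if is_threat]
    predictions.clear()          # the night's working set is purged at midnight
    return relevant              # only relevant numbers survive the re-initialization

if __name__ == "__main__":
    tonight = {"123-45-6789": True, "987-65-4321": False}
    print(nightly_cycle(tonight))   # ['123-45-6789']
    print(tonight)                  # {} - everything else was deleted
```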
Finch makes use of a backdoor into the Machine, which Ingram had included in the design, to access these numbers; the Machine releases them every time it predicts an imminent incident (Pedia of Interest 2014). The Machine is intelligent enough to schedule the relocation of its host hardware when it suspects its location may be compromised. It has also been designed such that it cannot be altered remotely, but only through a physical connection to its hardware, and it is capable of self-diagnosis, repair and upgrade. The Machine is also self-aware. At one point it created a false human identity to facilitate the hiring of personnel to enter backup codes of its memory in response to a virus that was slowly killing it. It even briefly played the role of guardian to its administrator. The administrator instructed it to prioritize humanity first, and it obeyed to the letter. The administrator or a trusted asset (like John Reese) 'can communicate with the Machine by talking into any security or traffic camera'. The camera flashes a red light to show that the Machine is processing the correspondence. The Machine then responds by dialing a pay phone and uttering a series of characters that bear a specific meaning (Pedia of Interest 2014).

In the TV show there is another, similar and competing AI system called Samaritan. It was also meant to be used to predict, pre-empt and tackle terrorism after 9/11. Government funding for the Samaritan project was pulled, however, and its creators were forced to shelve their work. Its creator, Claypool, backed up and hid the source code, but it found its way into the hands of a private technology firm named Decima Technologies. Decima deployed Samaritan and convinced the government to make NSA feeds available in order to see the capability of the system. What followed was a demonstration of the true might of Samaritan (Pedia of Interest 2014).

Just like the Machine, Samaritan identifies individuals using the data it gets from video feeds, audio and existing information. It processes this data and classifies individuals as threats to its mandates, deviants from social norms, or assets that work for it. It is also able to make predictions using the processed data, and it renders this data much better than the Machine. A subject's profile bears more details, such as their identity, current latitude and longitude, mobile phone IMEI, and 'observations, conclusions and recommended course of action'. Just like the Machine, it identifies assets and other important individuals, who are assigned unique info-cards that bear their function, SSN and authority (Pedia of Interest 2014). An individual's identity is compared against their consumption of internet and media content, medical history, relationships and even mobile application data. Their daily routes and routines are also deduced and even predicted, and deviations are easily detected. Unlike the Machine, Samaritan is able to match a person's biometrics and gait. It is able to recover and restore deleted data, like surveillance footage, and is quite capable of cyber-warfare as well (Pedia of Interest 2014).

Samaritan is also capable of prioritizing its mandates as Dominant and Auxiliary. The Dominant Mandate is to uphold national security, while the Auxiliary Mandate is to preserve and protect itself by neutralizing threats to its survival. The Primary Operations are goals that are not classified as mandates. Similar to the Machine, it can also recruit human assets and agents to do its bidding.
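Samaritan's richer subject profile, as described above, is essentially a record combining identity, coordinates, a device identifier, a threat/deviant/asset classification, observations and a recommended course of action. The hypothetical data structure below merely restates those described fields; it is not taken from the show or from any real system.

```python
# Hypothetical data structure restating the profile fields this paper attributes to
# Samaritan: identity, current coordinates, phone IMEI, a classification (threat,
# deviant or asset), observations and a recommended course of action.
from dataclasses import dataclass, field
from enum import Enum

class Classification(Enum):
    THREAT = "threat to its mandates"
    DEVIANT = "deviant from social norms"
    ASSET = "asset working for the system"

@dataclass
class SubjectProfile:
    identity: str
    latitude: float
    longitude: float
    imei: str
    classification: Classification
    observations: list[str] = field(default_factory=list)
    recommended_action: str = "monitor"

if __name__ == "__main__":
    profile = SubjectProfile(
        identity="Jane Doe",
        latitude=40.7128, longitude=-74.0060,
        imei="356938035643809",
        classification=Classification.DEVIANT,
        observations=["deviation from daily routine detected"],
        recommended_action="apprehend",
    )
    print(profile.classification.value)   # deviant from social norms
```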
Samaritan is also able to manipulate situations and subjects for its own purposes. Quite like the Machine, it is capable of upgrading itself with new algorithms and of establishing business entities to further its purposes (Pedia of Interest 2014).

Critique/Evaluation of the extent to which the capacity of the science-fiction AI is achievable in the real world in the near future

While it may seem that these two AIs could not be real, they share common features with the real-world AI. One of the greatest parallels is the collusion between government and private institutions. In the real world, companies like Google and Facebook cooperate with the government to reveal private user data, or are compelled by the courts to do so. In the fiction as depicted, a private firm, Decima Technologies, gets contracted by the government to help tackle terrorism using its Samaritan AI. Another similarity is that there is a level of interaction between the AIs and humans; the Machine, however, pushes this interaction further by directly interfacing with a human.

The real-world AI (PRISM) and the fictional AIs (Samaritan and the Machine) all handle the same types of user data. However, the data types and ranges collected by PRISM are selected by NSA analysts, whereas the Machine and Samaritan collect data autonomously, independent of any human input. The data collected by PRISM must be returned to humans for further analysis before being disseminated. Samaritan, on the other hand, analyzes the data itself and issues the social security number of a person of interest, who can then be pursued and interrogated by humans. Samaritan goes as far as giving a recommended course of action, such as apprehending a deviant or eliminating (killing) a threat. While there are marked differences between the AIs, real-world AI will evolve to be more autonomous in the next ten to fifteen years. As knowledge-based and database systems increasingly work together, AI will get smarter, to the extent of even replacing some human aspects of problem-solving. Advances in technologies like holograms, touch surfaces, gestures and eye-tracking wearables will make AI-human interaction almost as real as human-human interaction.

References

Gellman, B & Lindeman, T 2013, 'Inner workings of a top-secret spy program', The Washington Post, 29 June 2013, viewed 7 December 2014, http://apps.washingtonpost.com.

Gellman, B & Poitras, L 2013, 'U.S., British intelligence mining data from nine U.S. Internet companies in broad secret program', The Washington Post, 7 June 2013, viewed 7 December 2014, http://apps.washingtonpost.com.

Honavar, V & Uhr, L (eds) 1994, Artificial Intelligence and Neural Networks: Steps Toward Principled Integration, Academic Press, New York, NY.

Jones, MT 2008, Artificial Intelligence, Jones & Bartlett Learning, Burlington, MA.

Li, D & Du, Y 2007, Artificial Intelligence with Uncertainty, CRC Press, Boca Raton, FL.

'Pedia of Interest: the encyclopedia dedicated to Person of Interest' n.d., Wikia, viewed 7 December 2014, http://personofinterest.wikia.com/wiki/Person_of_Interest_Wiki
