
So, IBM Has an AI (Artificial General Intelligence)

Dry Observer
So, IBM Has an AI (Artificial General Intelligence)
From my blog, [url=http://futureimperative.blogspot.com]Future Imperative[/url]:

[url=http://www.nytimes.com/2011/02/06/opinion/06powers.html?_r=1&scp=1&sq=%2...]IBM is pitting Watson[/url], a de facto artificial intelligence the size of an RV with a library of 200 million pages of assimilated information, against the top two Jeopardy champions of all time. Whether or not their supercomputer -- using 2,500 processing cores, each capable of handling 33 million operations per second -- actually wins, the very fact that an AI now exists which is capable of answering Jeopardy questions is a turning point in the quest for advanced, human-equivalent and human-superior artificial intelligence.

The key is ambiguity and specificity. Jeopardy "questions" are known for throwing in odd and offbeat humor and metaphors, while demanding that contestants sort through all of that in an average of three seconds and come back with the specific answer required (phrased as a question). Whether Watson succeeds or fails, IBM will have produced more than a search engine: an AI capable of sifting through vast data archives to draw out not just a meaningful answer, but the answer, in context, that the user is looking for. A user (Alex Trebek) who will apparently be interfacing by way of verbalized questions.

If this is possible, and clearly it is, Watson will give governments and research centers the ability to sift data automatically, with an AI not merely capable of "running a search," as Google's algorithms do, but of thinking about the question and weighing potential answers -- all in the space of a second or less, and all without requiring a live human to oversee the process, only its end result.

Consider this fact in the larger context. We already have two computers doing scientific research. One sifts through medical journals looking for secondary uses of pharmaceuticals and their analogues. The other can take the genome of a simple animal such as a nematode and sort through its genes one by one, conceiving, designing and performing experiments to test the properties of each of thousands of genes, handling the process effectively all by itself until the task is done. Meanwhile, we have Google's mighty search engine, an Android app which can make reservations at some restaurants at the imprecise verbal command of its owner, and a host of publicly developed apps for the iPhone.

What we have, in effect, is a rapidly evolving version of what the science-fiction game Eclipse Phase would call a "Muse": a basic artificial intelligence that accompanies its owner from their earliest years and handles essentially all of their tedious digital "paperwork" and other menial jobs. But between the emergence of that technology and [url=http://www-03.ibm.com/innovation/us/watson/what-is-watson/index.html]Watson[/url], we have all the tremendous strides forward in between. Given that we already have computers doing real scientific work, and now capable of searching the world's collective memory, those steps are apt to be substantial indeed. Already, a bit of crowdsourcing, some apps and off-the-shelf technology might make a primitive Muse possible, and of course computers are already adept at operating within the artificially limited parameters of the Web.

AI advancing at this pace will have tremendous impacts on employment, shifting career and business opportunities, technological development and other issues I will go into more extensively on this blog. But I leave you with one thought...
Formidable as this technology is, how much more potent does it become in the hands of extraordinarily gifted human beings, who are working to tap their own fullest potential in terms of not only the machines they use, but the skills they practice, the personal enhancements they embrace, and the lives they live?


Dry Observer
Re: So, IBM Has an AI (Artificial General Intelligence)
Oh, I just corrected the erroneous link for my blog, above, which led back to the New York Times for some reason. So... Any thoughts, anyone?


Axel the Chimeric
Re: So, IBM Has an AI (Artificial General Intelligence)
I've got to admit, I'd love to see this show, but it's left me philosophical. I can't find myself excited that this is a step towards an AGI, because I merely see a more sophisticated illusion over the face of what is ultimately a machine. I want to see an adaptive learning mechanism: something that is ultimately unintelligent at first, but can perceive, remember, and learn. A program that can absorb information, but also receive "rewards" and "punishments" for performing certain behaviours. Watson's an amazing step for software, but is it a step towards an AGI? I don't think so. I could be wrong; it'd be nice if I am. However, for now, this is merely another nifty tool in my eyes.
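For what it's worth, the reward/punishment loop I'm describing is the textbook reinforcement-learning idea, and a toy version fits in a screenful. Everything below is invented for illustration (the states, the rewards, the parameters) -- it's nothing Watson does, just a sketch of an agent that starts blank and shapes its behaviour from rewards alone:
[code]
import random

# Toy Q-learning sketch: an agent that starts blank learns, from
# rewards alone, to walk right along a 1-D corridor to a goal cell.
# Every name and number here is invented for illustration.

N_STATES = 6            # corridor cells 0..5; cell 5 holds the reward
ACTIONS = [-1, +1]      # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action] starts at zero: the agent begins tabula rasa.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit what's been learned.
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        nxt = min(max(state + ACTIONS[action], 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0    # the "reward"
        # Temporal-difference update: nudge the value estimate toward
        # the observed reward plus discounted future value.
        Q[state][action] += ALPHA * (
            reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt

# After training, the agent should prefer "right" in every cell.
print(["left" if q[0] > q[1] else "right" for q in Q[:-1]])
[/code]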
TBRMInsanity
Re: So, IBM Has an AI (Artificial General Intelligence)
There will be many small steps before the first true AGI appears. In fact, I don't think we will know when the first true AGI appears, because we will have been conditioned for years by "dumb" human-computer interfaces, to the point that a true AGI would appear much the same (only faster and better).
Jovian Motto: Your mind is original. Preserve it. Your body is a temple. Maintain it. Immortality is an illusion. Forget it.
nezumi.hebereke
Re: So, IBM Has an AI (Artificial General Intelligence)
I think just about everything is a 'step towards'. Smaller toes were a step towards human sentience. Perhaps as importantly, it's a step towards cultural acceptance. This isn't a robot with laser blasters that will conquer and rule its human competitors (could someone verify that for me?). It's just another neat tool, another way that computers are making our lives more interesting.
root
Re: So, IBM Has an AI (Artificial General Intelligence)
root@IBM has an AGI [hr] The fun part about AI work is that the definition for AGI shifts every time something new is shown to be doable with current technology. What was considered evidence of intelligence shifted right after Deep Blue beat Kasparov, and it will shift again when Watson or something like it beats champions at Jeopardy.
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
nick012000
Re: So, IBM Has an AI (Artificial General Intelligence)
[url=http://www.youtube.com/user/Rashad8821#p/u/]Here[/url] are the first two episodes of its appearance on Jeopardy; it absolutely slaughtered them.

+1 r-rep, +1 @-rep

root
Re: So, IBM Has an AI (Artificial General Intelligence)
root@IBM has an AGI [hr]
nick012000 wrote:
[url=http://www.youtube.com/user/Rashad8821#p/u/]Here[/url] are the first two episodes of its appearance on Jeopardy; it absolutely slaughtered them.
Sometimes I have an almost uncontrollable urge to smash looms.
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
Dry Observer
Re: So, IBM Has an AI (Artificial General Intelligence)
The previous report on IBM's de facto AI, Watson, generated a few direct and indirect responses worth mentioning.

First, The New York Times followed up the article linked in my last post with an analysis which actually acknowledged that Watson is more or less a basic AI. Whether or not this admission was a response to this blog's comments, it's interesting to see a major news organization noting this fact. The Times writer also discussed the race between artificial intelligence and intelligence augmentation (AI versus IA). While seeing these issues discussed in the public square was undoubtedly interesting, I'm afraid the quick overview provided may in fact have understated what is going on right now, in terms of these two complementary technological paths, and also the other resource and competitive pressures being brought to bear not just on the computer industry, but on the entire human race. I will go into these subjects more thoroughly in this blog in the near future, but for now, let me clarify.

As the Times notes, AI in Watson's league is a step towards eliminating a lot of expert consultation, at least when it comes to asking questions with relatively straightforward answers. But the greatest insights usually involve far more than just a rote answer. Knowledge certainly plays a role, and a great deal of that information can be, and has been, assimilated into books or databases. But a gifted professional may well be bringing together sensory information, "gut instinct" and a wealth of information that comes from having lived in the real world and having a deep understanding of it. Some symptoms picked up by an attentive doctor, for example, may be easily checked by a machine -- blood pressure, pupil dilation, heart rate. Others, such as subtle psychological cues, or insights made possible by a long familiarity with someone's personality and general lifestyle, may prove much harder. Truly major discoveries, on the other hand, could be much more difficult for an unassisted, simple AI to accomplish. Einstein's theories of relativity, for example, were not merely a shift in the paradigm of physics -- in order to find the answers, you had to understand that the questions even existed in the first place.

You might read the above qualifiers and think that I am now minimizing the impact of a Watson. Actually, no. The ability to give meaningful, even expert-level answers in a second or two when asked a question, especially when sifting vast databases, is potentially an immense change in itself. Consider: if Watson is that capable of understanding murky questions and responding to them accurately, then a scientist or inventor could ask for information on a whole host of questions and receive accurate and almost immediate replies. In effect, a huge part of the trivial, mentally draining, unrewarding and unprofitable work that an elite research team has to do... goes away. Or rather, is handled speedily and invisibly by the machine in question.

But this is only the beginning. As I noted previously, we already have two computers out there doing research -- sifting articles for secondary drug uses and analogues in one case, and determining the effect of each gene in a simple animal's genome in the other. Meanwhile, there are other powerful means of coming up with scientific discoveries or technological innovations, such as evolutionary algorithms.
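For readers unfamiliar with evolutionary algorithms, the core loop is small enough to sketch here. Everything in the following toy is invented for illustration -- the "target design", the population size, the parameters -- but the select/crossover/mutate cycle is the genuine technique:
[code]
import random

# Minimal evolutionary-algorithm sketch (illustrative only): evolve a
# bit string toward a target "design". Real engineering work uses far
# richer genomes and fitness functions; the loop structure is the same.

TARGET = [1] * 20                     # the (hypothetical) ideal design
POP, GENERATIONS, MUTATION = 30, 60, 0.02

def fitness(genome):
    # Fitness = how many positions match the target design.
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[: POP // 2]     # selection: keep the fittest half
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(TARGET))      # single-point crossover
        child = a[:cut] + b[cut:]
        # Mutation: occasionally flip a bit to keep exploring.
        child = [1 - g if random.random() < MUTATION else g for g in child]
        children.append(child)
    population = parents + children

print("best fitness after", gen + 1, "generations:",
      fitness(max(population, key=fitness)))
[/code]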
Because Watson can understand and act upon generalized questions, it's likely that an AI will soon be able to understand and act upon other orders, such as a command to begin a new line of basic research, to focus its drug searches on specific diseases or enhancements, or to find the ideal design (evolved from its algorithms) for a specific piece of technology. It's not just whether you can dispense with hiring a new expert for your team; it's that you may have gotten a specific, seemingly major task done in a matter of hours, minutes, or seconds, because your automated systems were able to understand exactly what you were asking for.

The expertise you can tap in simply searching for existing answers can be equally formidable for decision makers. A country facing shortages in its food supplies could, in a matter of moments, draw up information on forageable plants, various ways to produce more food (especially fast, high-yield and/or cheap methods), and which organizations might be willing to supply significant food -- for free, for an acceptable cost, or for barter (wheat for oil, or what-have-you). These searches might miss some options (the roots of those daylily plantings lining your highways, even in the dead of winter, for example), but at least leaders would no longer be at the mercy of the information and prejudices of the experts they happen to have on hand. And who, really, has a host of top-notch professionals on hand in every field, for every question? This kind of breadth and speed could critically improve decision making.

But, once again, it's not only the answers you get; it's what questions you ask in the first place. For instance, many researchers in the fields of human enhancement and human augmentation -- the study of how to help people be smarter, healthier and otherwise "better than normal" -- seem oblivious to related, complementary developments in sub-fields other than their own and perhaps one or two others, even though, ironically, some of that work is going on in extremely well-established disciplines. So an AI researcher might keep up with cybernetics, especially work on human-computer interfaces, and yet be unaware of much more mature fields such as nootropic drugs and nutrients, biofeedback, cranial electro-stimulation, self-hypnosis, accelerated learning and sensory-deprivation tanks... or even the full benefits of better nutrition and cardiovascular exercise, or the damage caused by sleep deprivation and stress. Or quite a few other interesting discoveries.

Lacking this knowledge can lead to some odd missteps. When last polled, for example, 20% of American scientists admitted to using a drug to improve their thinking... and remarkably, the two main pharmaceuticals employed were Ritalin and Adderall -- two substances with limited uses and well-established, often dangerous side effects. With relatively safe nootropics like Piracetam and general alertness-enhancers such as Modafinil available, seeing that many of America's scientists making such a questionable choice is surprising. Then again, the Times' follow-up article on Watson, in describing intelligence augmentation, speaks exclusively of useful software that can assist elite scientists and engineers, rather than the more formidable option of directly improving the intelligence, learning ability and creativity of the researchers themselves.
In fairness, the full extent of intelligence-augmentation experimentation may have been beyond the parameters of the piece, but more frequently this oversight results from sources who are themselves unaware of dramatic progress -- sources who may also harbor prejudices against specific lines of research, such as a dry-nanotech or AI triumphalist who feels an unspoken contempt for biological or psychological augmentations. Once again, it's a matter of the questions you ask, and how frequently you ask them, not just the quality of the information available to you. Our greatest discoveries are often made at the borders of our ignorance, not just at the pinnacles of our understanding.

Which brings us to an interesting twist to all of the above. There are many forces driving dramatic change in the world today. Some of these are new technologies, new opportunities and new competitive forces, but others are grave challenges that are coming fast. There are really too many of these factors to sum up here, so I will only touch on a few.

The intense competition of the computer industry is in many ways a quest not so much to dominate a long-standing market as to create new ones. If you look at some of the major products and/or companies to have emerged in the last two decades, you see Amazon and eBay, Google, the iMac, iPad and iPhone, the Droid smartphone, the crowdsourced software "apps" for smartphones, a host of open-source software (Mozilla, Ubuntu, Python), PlayStation and Xbox, blogging, e-readers such as the Kindle, Facebook and Twitter, and, of course, IBM's Watson. And more. Quite a few of these innovations were sneered at, yet computer games' revenues now exceed those of the U.S. film industry, and Facebook and Twitter have been used by enterprising, educated young people as the organizational means to overthrow two Middle Eastern governments. Further, the ability to cooperate and compete over the Internet and throughout global markets, and to exchange software-based "goods" in seconds, has sharpened these competitive pressures. To return to the above list: how many of those innovations were the work of companies that were either viewed as fading or on life support, or which had only just come into existence?

But that furious commercial battle is only one tiny part of the larger picture. The debate about whether to pursue intelligence augmentation or artificial intelligence has for a long time missed the point... Right now, we already have intelligence augmentation, and brilliant human minds that can use it. Our computational breakthroughs, whether AI-related or not, have thus far been most spectacular at advancing research into enhancing humans -- whether by decoding the human genome, assaying new nootropic drugs, scanning the mind with improved MRIs and algorithms, putting the world's scientific journals online in searchable formats, and so on. Granted, it helps that just about any medical research is "dual-use": virtually any medical advance can be applied to enhancing some aspect of the human condition. Alzheimer's research equals memory enhancement, intelligence enhancement, and nootropic and longevity research. Parkinson's research equals intelligence, nootropic and longevity research. Artificial limbs mean cybernetic advancements. Repairing brain damage means advances relating to cybernetics, intelligence augmentation and artificial intelligence. And so on and so forth.

And, of course, much of this work is not only a theoretical augmentation.
Merely keeping existing, brilliant minds functioning at their best for a few more years effectively augments global scientific and technical research. Now imagine how much more could be done to assist those minds directly. In other words, the scientific and technical competition existing in any number of "hot" fields, and quite a few complacent ones, could be intensified simply by augmenting the intelligence and creativity of their leading researchers. Clearly if, as of several years ago, one fifth of American scientists were taking some kind of drug to amplify their intellect, then this transformation is already underway.

The other half of this changing competitive picture is all of the new people now competing -- both a broader slice of the public in countries with established tech industries, and people throughout the world. Whether open-source programmers on Linux, hobbyists providing apps for iPhones, or startups emerging out of nowhere, the host of new minds involved in solving problems and/or creating new products is staggering. Now imagine if all, or even a majority, of those minds could be radically augmented in terms of their gifts, and empowered in terms of the knowledge and resources they could tap and the ease with which they could bring products and companies into existence.

But perilous changes are also taking place in our world, and they provide their own kind of "competitive challenge." The world consumes over a cubic mile of oil a year, plus vast quantities of natural gas and coal. Those supplies are not only limited, but the energy required to find, extract, refine and ship them to market, particularly in the case of oil, has been steadily increasing over the last century. And our production of oil is almost certainly near, at or just past our ultimate global peak in overall production by volume (and probably well past it in terms of net energy). Rising energy prices feed through into everything, particularly in oil's case, as it happens to be a feedstock in a huge number of products -- in particular, almost anything made out of plastic. Rising prices and/or falling profits for virtually all goods and services put financial pressure on everything, which is bad news in a global economic downturn as severe as this one.

Meanwhile, climate change is well underway. Some of the dire impacts that supposed pessimists warned could arrive in just a decade or two, such as disruptions to our food production, may already be here. Severe drought in Russia, parts of China and India and in western Australia, severe flooding in Pakistan and eastern Australia, and very hard frosts and ice storms in Mexico, southern China and some localities in the U.S. will almost certainly damage global food supplies in 2011. In countries where the average household spends 40% to 50% of its income on food, doubling food prices means economic ruin, if not starvation, for many, many people. We should not be surprised that dramatically higher food prices have helped drive revolutions in the Middle East.

Nothing makes people believe in change like seeing the end coming. But ironically, being driven to the wall may prove to be our greatest evolutionary hope. When you no longer have any excuses, delusions or options, you have no choice but to change. That change may be for good or for ill, but at some point it becomes inevitable.
Our mission, then, is to make the best choices we can with the information and opportunities before us, and to help provide better alternatives to others, so that when they are forced to leap headfirst into change, they choose to leap in the wisest direction.


Dry Observer
Re: So, IBM Has an AI (Artificial General Intelligence)
Yes, I've noticed the changing AI definitions, too, Root. As I've said, we now have two different computers doing specific types of scientific research on their own. To be blunt, I think we're progressing towards Strong AI, or at least Stronger AI, piecemeal.


Decivre
Re: So, IBM Has an AI (Artificial General Intelligence)
I think everyone should take a step back for a second. IBM themselves have acknowledged that this is not thinking software of any sort. Watson was designed to filter and understand natural language, and parse it in a way that allows the system to query a database. That is not a thinking machine, but rather a question/answer parsing system. There is a wide difference.

Second, I don't really consider Watson to be the winner of that contest. It had an unfair advantage in many respects, most obviously in the fact that it was digitally pressing the buzzer when it was time to respond. If Watson calculated its answer before the players were able to buzz in, it would always buzz in 4 milliseconds after the question finished -- it was incapable of buzzing in any earlier -- which means it was immune to being frozen out of answering and had a vast speed advantage. If they really wanted to test its capabilities as a Jeopardy contestant, they would have allowed players to buzz in at any time, and transmitted the question to Watson at the same time they revealed it to players, testing Watson's speed at parsing and answering the question against a human's capability to do so. They did not. Instead, this was simply an exhibition of how far their natural language databasing system has come. The other players didn't really have a chance because of Watson's clear speed advantage.

So again, this is not a leap forward in AI, but a leap forward in databasing and natural language interfacing... but that doesn't mean this isn't an amazing thing. Personally, I think that natural language interfacing and databasing are a far more important element in computing than artificial intelligence is. AI basically overlaps with the natural intelligence we already have. This system, on the other hand, potentially assists people in sifting through vast amounts of data to find the information and answers they seek in a far smaller amount of time.

Imagine this: a nurse types a patient's symptom report into a system and runs it through a database to find the likely problem, which is immediately sent to a nearby doctor -- far quicker than having him skim through medical books for the answer. Imagine an internet search engine that you can ask actual questions of, rather than speaking unnaturally in individual search terms. Finally (and most importantly, in my opinion), imagine a chat program like Yahoo Messenger that is capable of translating anything that is typed into whatever language the receiver speaks, in a way that they can naturally understand, all on the fly.

It may not be a leap in artificial intelligence, but this is potentially a leap for us currently-existing intelligences. We may be looking at the first steps towards an instinctive computer interface -- one that takes minimal effort to master, and that anyone is capable of utilizing. A natural language parser is just a couple of steps away from a neural interface.
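To show the kind of parsing I mean, here's a deliberately tiny toy -- my own illustration, not IBM's actual pipeline (which they haven't published at this level of detail). One narrow pattern of English becomes a key for a database lookup, and nothing here "thinks":
[code]
import re

# Toy question/answer parser: a natural-language question is reduced
# to a key for a database lookup. Facts are invented for illustration.

FACTS = {
    ("capital", "france"): "Paris",
    ("capital", "japan"): "Tokyo",
    ("author", "moby-dick"): "Herman Melville",
}

def answer(question):
    q = question.lower().strip(" ?")
    # Parse: "what is the <relation> of <thing>" -> (relation, thing)
    m = re.match(r"what is the (\w+) of ([\w-]+)", q)
    if not m:
        return "I can't parse that."
    relation, thing = m.groups()
    return FACTS.get((relation, thing), "No answer in my database.")

print(answer("What is the capital of France?"))    # -> Paris
print(answer("What is the author of Moby-Dick?"))  # -> Herman Melville
[/code]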
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
nezumi.hebereke
Re: So, IBM Has an AI (Artificial General Intelligence)
Decivre wrote:
So again, this is not a leap forward in AI, but a leap forward in databasing and natural language interfacing...
I don't think these two are mutually exclusive.
Decivre
Re: So, IBM Has an AI (Artificial General Intelligence)
nezumi.hebereke wrote:
Decivre wrote:
So again, this is not a leap forward in AI, but a leap forward in databasing and natural language interfacing...
I don't think these two are mutually exclusive.
Natural language interfacing and databasing are certainly things that an artificial intelligence can utilize... but it's something that any intelligence could utilize. By that logic, nearly every single piece of software can be looked at as a "step towards AI", which I think would be a ludicrous claim.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
nezumi.hebereke
Re: So, IBM Has an AI (Artificial General Intelligence)
I guess I don't see how something true can be 'ludicrous'. (I don't agree that every software advance is a step towards artificial intelligence, but very many are. This is bigger than most. Just check out how much of psychology is dedicated to the acquisition and understanding of language and information.)
Decivre
Re: So, IBM Has an AI (Artificial General Intelligence)
nezumi.hebereke wrote:
I guess I don't see how something true can be 'ludicrous'. (I don't agree that every software advance is a step towards artificial intelligence, but very many are. This is bigger than most. Just check out how much of psychology is dedicated to the acquisition and understanding of language and information.)
The difference here is that Watson isn't designed to "understand" language and information. It's designed to parse a natural language statement into a machine-readable database query. There is a difference between the two. It'd be like saying that the Rock Band videogames "understand" music when they score you in the singing portion (they don't... they recognize tonality and compare it to a stored tone record to see how close you are to the original master recording). It would be like saying that Amazon's website "understands" your purchasing tastes when it recommends a product (it doesn't... it cross-compares your purchasing records, wishlist and other account information against the purchasing records of other customers, sees what purchases you have in common and what purchases each of you lack, then recommends accordingly).

It will vastly improve the way our machines can handle user input, but I don't see how this will be any sort of step towards AI. Even IBM has referenced this; they plan to use it in hospitals and corporations where people can use natural language to query databases, and later in consumer-level computers for more advanced search engines, and perhaps for future input systems once voice recognition research improves. These technologies can only really help an AI in the same way they help us... by allowing it to interface with "dumb" computers using natural language.
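That Amazon-style logic is easy enough to mock up, which is rather my point: there's no "understanding" in it, just set arithmetic over purchase records. All the data below is invented:
[code]
# Toy of the cross-comparison recommendation idea: find the customer
# whose purchases overlap most with yours, then suggest what they own
# and you don't. Invented data; real systems are vastly more elaborate.

purchases = {
    "you":   {"ep-corebook", "dice", "gm-screen"},
    "alice": {"ep-corebook", "dice", "sunward", "gatecrashing"},
    "bob":   {"cookbook", "blender"},
}

def recommend(user):
    mine = purchases[user]
    # "Most similar" customer = largest overlap in purchase sets.
    peer = max((u for u in purchases if u != user),
               key=lambda u: len(purchases[u] & mine))
    return purchases[peer] - mine      # what they have that you lack

print(recommend("you"))  # -> {'sunward', 'gatecrashing'} (order may vary)
[/code]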
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
nezumi.hebereke
Re: So, IBM Has an AI (Artificial General Intelligence)
But babies do learn to "understand" language and their world as they learn to recognize natural language. The development of one for computers paves the way for the other, even if it's only in that we are creating larger, more efficient data-processing methods. I feel though that we aren't really arguing against each other. A jellyfish may not seem like a revolution in intelligence (in that it still doesn't have any), but it is a critical step between paramecium and complex vertebrates in the course of evolution, so in that regard, yes, it is a huge revolution.
Decivre
Re: So, IBM Has an AI (Artificial General Intelligence)
nezumi.hebereke wrote:
But babies do learn to "understand" language and their world as they learn to recognize natural language. The development of one for computers paves the way for the other, even if it's only in that we are creating larger, more efficient data-processing methods. I feel though that we aren't really arguing against each other. A jellyfish may not seem like a revolution in intelligence (in that it still doesn't have any), but it is a critical step between paramecium and complex vertebrates in the course of evolution, so in that regard, yes, it is a huge revolution.
Babies don't understand language because of an internalized database, nor do they convert natural language into machine code. Babies learn language by memorizing sounds through mimicry, and by inferring language rules and syntax through observation and repetition. Watson does neither of these things: it worked off a static database, could not learn, and, according to IBM, had to convert any question into a logical form before it could even try to answer it (while most analysis of natural intelligence suggests that our minds don't even process things logically without training ourselves to). Look up open-domain question answering for more information on what Watson really did.

Using natural language input to produce natural language output, at least to me, is no more "intelligent" than typing numbers into a calculator and getting an answer (humans can do math as well)... at least not until that computer understands why it is answering the question that way. That doesn't make it any less useful to research, however.

I agree with your jellyfish analogy, but counter-argue that computers do not evolve, and therefore this element might not be a necessary step to producing an intelligence. But to play devil's advocate against myself: we don't necessarily know. Technology being what it is, we can't easily tell how it will be utilized in the future. No one could have predicted Web 2.0 and social networks in the 90s. I may very well be wrong.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
Axel the Chimeric
Re: So, IBM Has an AI (Artificial General Intelligence)
I have to wonder if there's some way to create an evolutionary system of sorts; one that takes input, mimics it, and learns based on the response. A machine that actively learns but begins tabula rasa; it accepts information and adapts, organizing it into self-created groups as a result of positive and negative learning standards. It's a very ground-up way to build an AI, since the end result would be very different from what you started with. That would make it a tad unpredictable and uncertain, too, especially if you wanted to alter it later, but that's always the risk you run. This sort of thing would require a very good script and a lot of time and patience, since it'd be like raising an exceptionally dumb baby with no innate ability to parse language. The end result would be a fully intelligent AI, though.
Decivre
Re: So, IBM Has an AI (Artificial General Intelligence)
Axel the Chimeric wrote:
I have to wonder if there's some way to create an evolutionary system of sorts; one that takes input, mimics it, and learns based on the response. A machine that actively learns but begins tabula rasa; it accepts information and adapts, organizing it into self-created groups as a result of positive and negative learning standards. It's a very ground-up way to build an AI, since the end result would be very different from what you started with. That would make it a tad unpredictable and uncertain, too, especially if you wanted to alter it later, but that's always the risk you run. This sort of thing would require a very good script and a lot of time and patience, since it'd be like raising an exceptionally dumb baby with no innate ability to parse language. The end result would be a fully intelligent AI, though.
The hardest part of that would be creating a program capable of abductive and inductive reasoning, rather than simply deductive reasoning. Those two forms of logic are much harder to quantify in concrete terms, because we don't really understand how they work in our own minds, at least not in a concrete way. Ironically, I think the first major step towards producing a true artificial intelligence is decoding the processes that go on within the mind of a natural intelligence. From there, creating AI is as simple as emulating those natural processes digitally. Until then, we'll probably build things that only amount to learning software.
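To show the gap I mean, here's a toy contrast of my own framing (nothing rigorous): deduction just applies a known rule, which a lookup can mechanize, while induction has to guess a rule from examples and can be wrong the moment you step outside them:
[code]
# Toy contrast between reasoning modes (my own framing, nothing
# rigorous): deduction applies a known rule and is easy to mechanize;
# induction guesses a rule from examples and can fail beyond them.

RULES = {"human": "mortal"}        # known rule: all humans are mortal

def deduce(kind):
    # Deduction: rule + instance -> a conclusion that is certain,
    # given the rule. A database lookup suffices.
    if kind in RULES:
        return "Socrates is " + RULES[kind]
    return "no applicable rule"

def induce(points):
    # Induction: guess a general rule (here, a straight line) from
    # observed examples. The guess may be wrong outside the data.
    (x1, y1), (x2, y2) = points[0], points[-1]
    slope = (y2 - y1) / (x2 - x1)
    return lambda x: y1 + slope * (x - x1)

print(deduce("human"))                  # certain, given the rule
guess = induce([(0, 0), (1, 2), (2, 4)])
print(guess(10))                        # 20.0 -- plausible, never guaranteed
[/code]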
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
nezumi.hebereke
Re: So, IBM Has an AI (Artificial General Intelligence)
They do that, mostly for simpler systems. The upper limit for intelligence is the number of 'mental' connections computers have. If you're tying bits to neurons, or processing speeds to, well, processing speeds (bearing in mind parallel computing), our top-of-the-line computers are pushing the intelligence of locusts. So if you want to design a computer system as intelligent as a locust, that's the way to go. Until our hardware is a few orders of magnitude better, real AI is probably just about impossible.
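The back-of-envelope arithmetic is easy to run yourself. Fair warning: every figure below is a rough assumption, and the verdict swings by orders of magnitude if you move any one of them:
[code]
# Back-of-envelope only: every figure below is a rough, contestable
# assumption. The method is the point, not the exact numbers.

locust_neurons  = 4e5      # insect brains: order 10^5-10^6 neurons
human_neurons   = 8.6e10   # commonly cited human estimate
synapses_per    = 1e3      # very rough synapses per neuron
ops_per_syn_sec = 100.0    # assume ~100 "updates" per synapse per second

def brain_ops(neurons):
    # Crude proxy: neurons x synapses x update rate = "operations"/sec.
    return neurons * synapses_per * ops_per_syn_sec

machine_flops = 1e15       # a petaflop-class machine, roughly era top end

print("locust-equivalent ops/s: %.1e" % brain_ops(locust_neurons))
print("human-equivalent ops/s:  %.1e" % brain_ops(human_neurons))
print("petaflop machine:        %.1e" % machine_flops)
# On these assumptions the machine clears the locust easily and falls
# short of the human by about 10x -- but shift any one figure by an
# order of magnitude and the verdict flips. That's the real lesson.
[/code]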