We might be making inroads into Artificial Thinking

Discussion in 'Science and Technology' started by intrinsical, Dec 27, 2016.

  1. intrinsical

    intrinsical Commodore Commodore

    Joined:
    Mar 3, 2005
    Location:
    Singapore
    This won't be covered in the news because it's cutting-edge AI research in Natural Language Processing that has only happened in the past two years. I need to stress that it could still turn out to be the wrong approach, but I don't think so, which is why I'm typing this out.

    The short version is this: for the first time, we have figured out a way to represent the meaning behind a human-language sentence as a mathematical matrix. A matrix that contains the meaning of a sentence. One could almost say this matrix represents a single thought or idea.

    Right now we are not doing a lot with these sentence-level matrices of meaning, but as an artificial intelligence and natural language researcher I can't wait to see how researchers will extend this work. I'm pretty sure that in the short term we will be able to combine these sentence-matrices into a larger matrix/tensor that represents the meaning of an entire paragraph or document.

    Perhaps we can manipulate these meaning-matrices, these "thoughts". Perhaps we can alter these thoughts in a consistent way. If we can combine small thoughts into bigger thoughts, perhaps we can do the reverse and split a big thought into smaller thoughts, then transform those thoughts back into words. This would represent a huge development, as it would be the first time we have an algorithm that works on artificial thoughts, and perhaps our first foray into artificial thinking.



    Let me backtrack a little. Up until about 3-4 years ago, the AI technology that handles human language could only do simpler tasks, such as predicting what you're going to type, identifying whether the tone of a paragraph is positive or negative, or identifying potential answers to a question. All of these tasks may seem complex, but in reality they could be achieved simply by counting words and applying some relatively simple probability computations.

    For example, in my research work, my AI algorithm from over a decade ago could answer a question like "Name opponents who George Foreman defeated" simply by searching for names that appear in articles near mentions of "George Foreman" AND different permutations of "defeat". This requires nothing more than a big search engine and some relatively advanced word-probability computation.

    The problem with this approach is that the algorithm doesn't actually understand human language, and it would produce the same set of answers to the question "Name opponents who defeated George Foreman". The algorithm only does a word-by-word analysis of the question, which allows it to identify "George Foreman" as a name and "defeat" as somehow related to that name. However, the algorithm does not realize there's a huge difference between "George Foreman defeated" and "defeated George Foreman", as the toy sketch below illustrates.
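
    To make that failure concrete, here is a toy sketch of a bag-of-words answer finder. It is my own illustration, not the actual system from my research: the articles, the names and the keyword-overlap threshold are all invented for the example.

    ```python
    import re

    ARTICLES = [
        "In 1973 George Foreman defeated Joe Frazier in two rounds.",
        "Muhammad Ali defeated George Foreman in the Rumble in the Jungle.",
    ]

    def find_candidates(question, articles):
        # Reduce the question to a bag of words. Both phrasings of the
        # Foreman question collapse to the SAME set -- that is the flaw.
        keywords = set(re.sub(r"[^\w\s]", " ", question).lower().split())
        candidates = set()
        for text in articles:
            text_words = set(re.sub(r"[^\w\s]", " ", text).lower().split())
            # "Relevant" article = enough keyword overlap, order ignored.
            if len(keywords & text_words) >= 3:
                words = text.split()
                # Grab adjacent capitalized word pairs as candidate names.
                for a, b in zip(words, words[1:]):
                    if a[:1].isupper() and b[:1].isupper():
                        candidates.add(f"{a} {b}".strip(".,"))
        return candidates - {"George Foreman"}

    # Both calls return the same two names, even though each question has
    # only one correct answer: the matcher cannot tell who defeated whom.
    print(find_candidates("Name opponents who George Foreman defeated", ARTICLES))
    print(find_candidates("Name opponents who defeated George Foreman", ARTICLES))
    ```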

    This is why there are human-language problems that we still cannot solve well, such as translating between languages. To solve these more advanced problems, we needed a way to go beyond processing word by word. Somehow, we needed to transform the individual words in a sentence or passage into meaning.

    Three years ago, we began making inroads with a technique called word2vec. Word2vec uses a relatively simple neural network to convert every unique word into a unique mathematical vector, like [0.1234, -0.0013, 0.7532, ..., -0.24612]. The numbers in the vector represent a coordinate in a high-dimensional space, and this space is structured by word similarity. So the coordinates for "water" and "liquid" are located geometrically close to each other, whereas "fire" and "ice" are located farther apart. Such a representation also embeds quite a bit of common-sense knowledge, so "dog" and "fetch" lie closer together than "dog" and "think". Every single word in the language gets its own unique coordinate, and we can easily compute how similar or different any pair of words is. However, as advanced as word2vec is, it still operates on a word-by-word basis, so it still cannot do complex things like translating between languages.
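
    For the curious, here is a minimal sketch of what training and querying such word-vectors looks like with the gensim library (one common open-source implementation; the parameter name vector_size is from gensim 4.x). The toy corpus is far too small to learn real similarities, so treat the printed numbers as illustrative only.

    ```python
    from gensim.models import Word2Vec

    # Tiny stand-in corpus: each sentence is a list of tokens.
    # Real models are trained on billions of words.
    corpus = [
        ["water", "is", "a", "liquid"],
        ["ice", "is", "frozen", "water"],
        ["the", "dog", "likes", "to", "fetch"],
        ["fire", "is", "hot"],
    ]

    # vector_size is the number of dimensions; real models typically use
    # a few hundred. min_count=1 keeps every word in this tiny corpus.
    model = Word2Vec(corpus, vector_size=50, window=3, min_count=1)

    # Cosine similarity: values near 1.0 mean "geometrically close".
    print(model.wv.similarity("water", "liquid"))
    print(model.wv.similarity("fire", "ice"))
    ```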

    However, things started to change over the past two years, when someone figured out how to use a more specialized type of Recurrent Neural Network built from LSTMs (Long Short-Term Memory units) to mathematically combine the individual word2vec word-vectors into a single matrix.
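
    Here is a minimal sketch of that idea, written in PyTorch as one possible choice (the dimensions are illustrative, not taken from any particular paper): run the sequence of word vectors through an LSTM and keep its final hidden state as a fixed-size summary of the whole sentence, word order included.

    ```python
    import torch
    import torch.nn as nn

    embedding_dim = 300   # size of each word2vec-style word vector
    hidden_dim = 512      # size of the sentence representation

    # A single-layer LSTM encoder over sequences of word vectors.
    encoder = nn.LSTM(input_size=embedding_dim, hidden_size=hidden_dim,
                      batch_first=True)

    # Stand-in for a 7-word sentence already converted to word vectors:
    # shape (batch=1, sequence_length=7, embedding_dim).
    sentence = torch.randn(1, 7, embedding_dim)

    outputs, (h_n, c_n) = encoder(sentence)

    # h_n is a single tensor summarizing the whole sentence. Unlike a bag
    # of words, it depends on the order in which the words arrived.
    print(h_n.shape)  # torch.Size([1, 1, 512])
    ```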

    This is as far as the current research has gotten. It's not much, but like I said, this is the first time we've ever been able to convert a sentence into a mathematical matrix that still contains most of the meaning of the original sentence. If we can learn to manipulate this matrix, by massaging its meaning, by combining meanings and by separating big meanings into smaller ones, it could potentially be the first time we have achieved something akin to artificial thoughts, and the ability to artificially alter thoughts.
     
    Last edited: Dec 27, 2016
    lpetrich and Robert Maxwell like this.
  2. Asbo Zaprudder

    Asbo Zaprudder Admiral Admiral

    Joined:
    Jan 14, 2004
    Location:
    Rishi's Sad Madhouse
    Does the matrix concept relate to semantic networks and other similar models? Also, how is the origin chosen for the space? Is it arbitrary? By the way, isn't "Name opponents who George Foreman defeated" grammatically incorrect? I think it should be "Name opponents whom George Foreman defeated". This is the sort of error that English speakers can accommodate, but perhaps it might confuse an AI.
     
    Last edited: Dec 27, 2016
  3. intrinsical

    intrinsical Commodore Commodore

    Joined:
    Mar 3, 2005
    Location:
    Singapore
    They're quite different beasts. Semantic networks are graph structures that treat each word as a node in the graph, and words only have rigid, manually defined relationships with other words. If the semantic network does not have an arc linking "King" to "Man", then the computer has no way of knowing that a king must be male.

    The word-vectors assigned by word2vec are just coordinates, so the meaning of a word is not rigidly defined. I could take the coordinates for a word like "King", pick a random direction (and in a high-dimensional space, there are lots of choices) and travel in a straight line, encountering words like "sovereign", "ruler" and "monarch". If I picked a different direction, I would encounter words like "prince", "duke" and "lord". In fact, it is even possible to do vector arithmetic with words. I could take the vector for "King", subtract "Man", add "Woman", and the resulting vector would point close to the coordinates for "Queen". This suggests that word2vec is also learning some structured knowledge and arranging words in a structured manner in the high-dimensional space.
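
    The "King - Man + Woman" arithmetic is easy to try yourself with gensim's pretrained-vector downloader. The model name below is one of the pretrained sets gensim distributes; note it is a large one-time download (roughly 1.6 GB), so this is a sketch rather than something to run casually.

    ```python
    import gensim.downloader as api

    # Pretrained vectors trained on the Google News corpus.
    vectors = api.load("word2vec-google-news-300")

    # most_similar does the vector arithmetic internally:
    # vec("king") - vec("man") + vec("woman"), then finds nearest words.
    print(vectors.most_similar(positive=["king", "woman"],
                               negative=["man"], topn=3))
    # "queen" is typically the top result.
    ```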

    The origin is probably very close to [0 0 0 0 0 0 ... 0], and it isn't chosen; it arises naturally from the maths and the number of dimensions involved.

    The original questions came from a 2005 experiment conducted by the Text Retrieval Conference. Even back then, all of the AI systems were driven by probability and statistics, and very few used rigid formal logic. So the AI already knew that "who" and "whom" are highly likely to be synonyms, and it was not confounded.
     
    lpetrich and Robert Maxwell like this.
  4. intrinsical

    intrinsical Commodore Commodore

    Joined:
    Mar 3, 2005
    Location:
    Singapore
    I guess the key point I'm trying to make is that researchers are starting to realize that whatever representation is used for meaning/thoughts, it has to be a representation that is modifiable. With vectors, matrices and tensors, there lies the potential that we can use some form of addition, dot product or cross product to change a "thought", combine simpler thoughts into more complex ones, and split complex thoughts into simpler ones.

    If we figure out how to achieve this, it would very likely become a whole new school of AI.
     
    Robert Maxwell likes this.
  5. Robert Maxwell

    Robert Maxwell memelord Premium Member

    Joined:
    Jun 12, 2001
    Location:
    space
    I realized after reading this that I looked at word2vec a while back and didn't really understand it. Your explanation has helped a lot! That's really fascinating. I can already think of some applications to put to it myself.
     
  6. Dryson

    Dryson Commodore Commodore

    Joined:
    Apr 13, 2014
    Before the AI can learn all of the complex sentences you are discussing, the base of its learning has to be the will to survive.

    One way to create an AI will to survive is to add a section to the AI's programming that constantly checks its power units and its upload and download ports for damage. The more damage these and other systems take, the closer the original programming comes to its end. Because the original programming includes instructions to go into emergency mode and upload its original programming base and its acquired knowledge before its memory fails, the AI would go into survival mode, trying to upload its data and programming into a similar unit to ensure that its original programming and stored data are not lost.

    The base programming would be similar to a human writing down the accounts of their daily life to ensure an accurate first-person perspective is created.
     
  7. sojourner

    sojourner Admiral In Memoriam

    Joined:
    Sep 4, 2008
    Location:
    Just around the bend.
    mmmmmm, no.
     
  8. Asbo Zaprudder

    Asbo Zaprudder Admiral Admiral

    Joined:
    Jan 14, 2004
    Location:
    Rishi's Sad Madhouse
    @Dryson makes an interesting point, IMO. An AI that has no goals of its own, no emotions or feelings such as fear or empathy, and no way to course-correct based on memories of previous positive or negative experiences is not an independent, mindful being when viewed from a human perspective. However, does an AI need any of those traits just to fulfil our need for willing servants, cheap and expendable labour, chatty companions, or sex toys? Probably not. In fact, it would be better that such entities do not suffer.
     
    Last edited: Jan 6, 2017
  9. intrinsical

    intrinsical Commodore Commodore

    Joined:
    Mar 3, 2005
    Location:
    Singapore
    It helps if you understand a bit more about what space is like in higher dimensions. I find this video to be a helpful primer on the properties of space in higher dimensions, especially the part about how a unit hypersphere occupies increasingly less space as you increase the number of dimensions.
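
    That shrinking-hypersphere claim is easy to check numerically: the volume of a unit n-ball is pi^(n/2) / Gamma(n/2 + 1), which peaks around n = 5 and then falls toward zero. A quick check using only the Python standard library:

    ```python
    import math

    def unit_ball_volume(n):
        """Volume of the unit-radius ball in n dimensions."""
        return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

    for n in (1, 2, 3, 5, 10, 20, 100):
        print(f"n={n:3d}  volume={unit_ball_volume(n):.6g}")
    # n=2 -> 3.14159 (circle), n=3 -> 4.18879 (sphere), n=20 -> 0.0258,
    # n=100 -> ~2.4e-40: almost all of the enclosing cube is "corners".
    ```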



    For the purpose of thinking machines, this suggests that we do not require an infinite-dimensional space to pack in all the unique "thoughts" or "ideas". Rather, a space with some arbitrarily large number of dimensions would be sufficient.

    We're still very far from a "proper" representation of ideas and thoughts, but I'm excited that we're making inroads!
     
    Robert Maxwell likes this.
  10. Asbo Zaprudder

    Asbo Zaprudder Admiral Admiral

    Joined:
    Jan 14, 2004
    Location:
    Rishi's Sad Madhouse
    Does this thought/idea/concept space have anything in common with a Hilbert space, which has been used to explain why the human mind sees yellow light instead of a mixture of red and green light?

    https://en.wikipedia.org/wiki/Hilbert_space
     
  11. PurpleBuddha

    PurpleBuddha Rear Admiral Rear Admiral

    Joined:
    Apr 14, 2003
    Then again, Buddhists will tell you to spend your life trying to eliminate desire, which is the ultimate cause of suffering. Perhaps the first functional self aware AI will also be the first truly enlightened individual. :)
     
  12. intrinsical

    intrinsical Commodore Commodore

    Joined:
    Mar 3, 2005
    Location:
    Singapore
    It's just standard Euclidean space with hundreds of dimensions.
     
  13. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    Your Mom
    True. It's still an AI, though, and probably way better at whatever you programmed it to do than any human intelligence would be under the same circumstances.

    Nope.

    And I'm also convinced that reproducing humanlike intelligence in AIs is not a necessary step in the first place. It's a task you could assign to a very smart AI if you wanted to emulate the mind and/or memories of a particular human, but an AI capable of doing that would be capable of emulating just about anything.
     
    Asbo Zaprudder likes this.
  14. Dryson

    Dryson Commodore Commodore

    Joined:
    Apr 13, 2014
    Here's how a Will To Survive might function for AI-driven robots, synthetics or bio-synthetics.


    Actuation Points
    - Actuation points would be the joints in the human body, such as the finger knuckles and wrist, elbow, shoulder, neck, hip, knee, ankle and toe knuckles. Each type of actuation point would have a range of motion that registers in the AI's central processing core as functioning normally, such as the Index Finger creating two right angles relative to a sensor in the palm of the hand and a sensor in the Middle Finger. Try it and you will see that two right angles are created when you point the Index Finger towards the ground. Another right angle is created when the second actuation point on the Index Finger is instructed to move towards the same-side elbow sensor.

    The sensors at each actuation point would allow for free movement up to a normal range of motion. If the signal from the actuation sensor to the range-of-motion sensor is broken, because the finger was forced past its range of motion and the control arm that moves the finger broke, information from the range-of-motion sensor would be sent to the Deep Data Storage and Processing main core, then processed to determine an overall damage-level percentage. That percentage would determine whether the AI should flee and transfer its data, simply flee and seek repair, or remain and continue its current operation. The data would also be sent to a deep data storage system, where a record of the time of day, temperature, humidity, terrain type and what the AI was doing when the damage took place would be kept for future reference. If the AI later encountered environmental patterns similar to those that caused the damage, the memory of the event would be recalled into the central processing system, and the AI would become nervous about possibly experiencing the same damage again.

    If two or more similar environmental patterns are recognized, the memory most like the pattern currently being recorded would be recalled. Basically, the AI would be constantly recording and cross-referencing its Deep Data Storage and Processing core for memory patterns concerning damage to its systems.

    Data Storage and Processing Systems
    - This system would contain the normal operating programs for each section of the AI's frame, such as range of motion for the actuation points, as well as programs designed to cross-index with one another to ensure that a range-of-motion program at one actuation point did not over-exert another actuation point and damage the AI. The DSAPS would also control fluid and electrical power distribution throughout the AI's body. Each of the AI's systems would therefore have a sensor or sensor group to ensure that electrical power and fluids were flowing through the AI normally. If any abnormality is discovered, a report is sent to the DSAPS, analyzed, and then passed to the Deep Data Storage Systems for future memory cross-referencing.

    Environmental sensors covering altitude, temperature, wind velocity, humidity, atmospheric composition (which would most likely be loaded into the AI before it is deployed to a planet, to avoid having to build such a large processing system into the AI), toxicity levels and terrain type would also be built into the DSAPS and distributed across the AI's frame.

    WiFi Receivers and Emitters - Send and receive data to and from WiFi points on the AI's frame, helping the AI determine damage and maintain situational awareness at a faster transfer rate.

    Deep Data Storage Systems - Whenever an abnormality is encountered by the AI, even the smallest one, such as the AI stepping on a rounded rock and causing its control-arm sensors to reach 50% of their tensile strength before breaking, the event is recorded and stored for future memory association.

    Power Supply Systems - These systems must be refilled on a regular basis due to use. If any sensor registers a low level of fluid or electrical power, it translates into a percentage of damage that the Processor would classify as life-threatening, dangerous, or contained but potentially threatening, and the Processor would then determine which action to take.

    Example of Will to Survive.

    Suppose our Bio-Synthetic is out walking around and suddenly a gasket breaks in several of the finger actuation joints, causing fluid to leak into the hand. The Processor would instantly know that fluid pressure was being lost from other sections of the body, sending the AI into a state of panic, as determined by fluid failing to pass through checkpoints and fluid-pressure sensor points recording no pressure. The amount of fluid failing to pass through checkpoints and pressure sensor points would determine how the AI reacted: seek maintenance right away, or seek a data download port on another AI unit or a storage beacon, to be retrieved later and re-inserted into a new AI frame if the leak was so severe that the AI might lose the ability to keep its Data Cores and Processing Systems operating.

    All AI programming would have a Bleach Program built into it: if 85% of the AI's systems are 50% damaged, the program would bleach all processing and data cores completely. A program tied into the Bleach Program would ensure that the AI remained in a state of panic when damaged, to avoid activating the Bleach Program and erasing all of the AI's memory. The Will To Survive program would therefore operate the AI normally while cross-referencing the Deep Data Storage System to avoid potentially hazardous situations, or at least to evaluate a situation so as to keep from tripping the switch that would activate the Bleach Program.
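
    For concreteness, here is a toy sketch of the damage-threshold logic described above. The 85%/50% bleach rule comes straight from the description; the other thresholds, names and actions are invented for the example.

    ```python
    def choose_action(damage_by_system):
        """damage_by_system maps a system name to a damage fraction (0..1)."""
        badly_damaged = sum(1 for d in damage_by_system.values() if d >= 0.50)
        # Bleach rule from the post: 85% of systems at 50% damage or worse.
        if badly_damaged / len(damage_by_system) >= 0.85:
            return "activate Bleach Program"
        worst = max(damage_by_system.values())
        if worst >= 0.75:          # invented threshold
            return "flee and upload data to another unit"
        if worst >= 0.40:          # invented threshold
            return "flee and seek repair"
        return "continue current operation"

    print(choose_action({"power": 0.10, "actuators": 0.55, "wifi": 0.00}))
    # -> "flee and seek repair"
    ```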
     
  15. Asbo Zaprudder

    Asbo Zaprudder Admiral Admiral

    Joined:
    Jan 14, 2004
    Location:
    Rishi's Sad Madhouse
    I agree, although I suspect truly smart AI will require, as part of its hardware inventory, a quantum computer with more than a mere handful of qubits. I believe Penrose is partially correct that the brain uses quantum processes, even if he is otherwise completely wrong in the conclusion he derives from it, namely that AI is an unachievable goal.

    https://www.elsevier.com/about/pres...roversial-20-year-old-theory-of-consciousness

    I would also trot out the old Dijkstra "submarine" quote, but it's become a bit of a cliché.
     
  16. Robert Maxwell

    Robert Maxwell memelord Premium Member

    Joined:
    Jun 12, 2001
    Location:
    space
    Wow, that sure is an obnoxious press release. I notice they make a bizarre leap from vibrating microtubules (which it sounds like they've confirmed exist and serve some function) to babble about consciousness and spirituality.
     
  17. Asbo Zaprudder

    Asbo Zaprudder Admiral Admiral

    Joined:
    Jan 14, 2004
    Location:
    Rishi's Sad Madhouse
    Yeah, it's much more propaganda than I like to see in such statements, and it wanders into some apparently 1970s weirdness. Penrose has a number of hypotheses that he seems to want accepted into mainstream consideration before he leaves us: state-vector reduction initiated by gravity at the Planck mass (22 micrograms), Conformal Cyclic Cosmology, twistor theory, and all the microtubule-associated stuff, not to mention his take on Platonism. However, he is not a fraud. While I might not agree with all of his ideas, whenever I've seen him present them publicly, he is always interesting.
     
  18. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    Your Mom
    Depends on how you define "truly smart." Quite a few modern expert systems already fit that definition for most practical purposes.

    It doesn't. Nerve firings and synapses are mainly chemical processes with an electrical (ionic) component. The human brain is not a computer and does not function like one, so saying the brain "uses quantum processes" is a bit like saying a banana split contains a hexadecimal flavor.

    I'm seeing more and more lately that science media is inundated with people who don't really understand how quantum mechanics works but still think it's interesting enough to associate it with all kinds of random shit they understand even less. It's like "We don't really understand the link between neural activity and consciousness... we don't really understand quantum mechanics either. Ergo, consciousness has something to do with quantum mechanics. Let's do a press release!"
     
    { Emilia } likes this.
  19. Asbo Zaprudder

    Asbo Zaprudder Admiral Admiral

    Joined:
    Jan 14, 2004
    Location:
    Rishi's Sad Madhouse
    I believe Penrose understands quantum mechanics better than anyone who frequents this forum. The quantum mechanical underpinning of biological processes is a growing field of research and should not be dismissed lightly.

    Whether the brain is a computer depends on how you define "computer" and perhaps also "brain".
     
  20. Robert Maxwell

    Robert Maxwell memelord Premium Member

    Joined:
    Jun 12, 2001
    Location:
    space
    If the brain was a computer, we wouldn't have needed to invent computers.
     
    Crazy Eddie likes this.