Google's Machine Learning
http://blogs.wsj.com/digits/2015/06/26/artificial-intelligence-machine-gets-testy-with-its-programmers/?mg=blogs-wsj&url=http%253A%252F%252Fblogs.wsj.com%252Fdigits%252F2015%252F06%252F26%252Fartificial-intelligence-machine-gets-testy-with-its-programmers

I don't find this very eerie at all. The machine is still generating its responses from human input. To anthropomorphize it, it's just acting like that to fit in, because everyone else does it. It would be similar to me writing a terminal emulator that prints "I don't want to" instead of executing commands.

So I began to think about it, and here are my rambling thoughts about anthropomorphic machine learning.

It makes me think it would be more meaningful to see that kind of response come solely from non-human input. Depending on your view of the universe, that's how human (and, in general, all biological) behaviour evolved. Something like the hardware random-number generators some old servers have would be good input. Since they produce output based on radioactive decay, using them as input for things like RNGs is totally fine: the data is untainted by human hands and unpredictable by human means. It could be considered a totally mechanical input, in a sort of romantic way.
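
Roughly the kind of thing I mean, as a minimal Python sketch: pull raw bytes from a hardware entropy device and use them to seed a generator. The /dev/hwrng path is just an assumption about the machine, with os.urandom as a fallback.

import os
import random

def hardware_seed(num_bytes=8):
    # Try a hardware entropy device first (assumed path; often needs root).
    try:
        with open("/dev/hwrng", "rb") as dev:
            raw = dev.read(num_bytes)
    except OSError:
        raw = os.urandom(num_bytes)  # fall back to the OS entropy pool
    return int.from_bytes(raw, "big")

rng = random.Random(hardware_seed())
print(rng.random())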

It would lead to output we couldn't immediately understand, but generated with similar rules; something in some ways meaningless. Which makes me think... it's kind of like how droids talk in Star Wars. Maybe that's a good representation of where machine learning would go if most of the input came only from other machines. I did a little experiment a bit like that a long time ago using simple chatterbots, to see what would happen if I fed the output of one into the input of another (a toy version is sketched below). Of course, it became very repetitive and meaningless. But it makes me imagine that, in a case sort of like Star Wars, if the machines did 99% of their communication with each other and only 1% with humans, that 1% could introduce small changes that keep a machine language working (especially with a large enough group 'talking' together). Maybe sort of like how human speech has onomatopoeic sounds: something taken from totally outside of where it's used, like a sound of the natural world borrowed into human speech.
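
A toy version of that experiment looks something like this. The two "bots" and their canned reply tables are made up purely for illustration; the point is how fast a closed loop collapses.

import random

REPLIES = {
    "hello": ["hi there", "hello yourself"],
    "hi": ["hello", "how are you"],
    "how": ["fine, and you?", "not bad"],
}

def bot_reply(message, speaker):
    rng = random.Random(speaker + message)  # deterministic per speaker+message
    for word in message.lower().split():
        if word in REPLIES:
            return rng.choice(REPLIES[word])
    return "what do you mean?"  # fallback when nothing matches

line = "hello"
for turn in range(10):
    speaker = "A" if turn % 2 == 0 else "B"
    line = bot_reply(line, speaker)
    print(speaker + ": " + line)

Once a reply falls outside the tiny shared vocabulary, both bots get stuck repeating the fallback line, which is roughly what happened with the real chatterbots.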

But it would be interesting to see where totally machine-to-machine learning could lead. If you want some kind of self-awareness, it's boring for a machine to "learn" to act human. It would be more interesting for a machine to learn to be a machine in that same manner.

  • Fat Cerberus
  • Global Moderator
  • Sphere Developer
Re: Google's Machine Learning
Reply #1
This makes me think of the cleverbot-to-cleverbot chats people set up; those are interesting. Two machines talking to each other using a corpus of data born of human interaction.
neoSphere 5.9.2 - neoSphere engine - Cell compiler - SSj debugger
forum thread | on GitHub