And Now for the Robot Apocalypse…


by James Corbett
July 28, 2015

Well, you can’t blame them for trying, can you?

Earlier today the grandiloquently named “Future of Life Institute” (FLI) announced an open letter on the subject of ‘autonomous weapons.’ In case you’re not keeping up with artificial intelligence research, that means weapons that seek and engage targets all by themselves. While this sounds fanciful to the uninformed, it is in fact a dystopian nightmare that, thanks to startling innovations in robotics and artificial intelligence by various DARPA-connected research projects, is fast becoming a reality. Heck, people are already customizing their own multirotor drones to fire handguns; just slap some AI on that and call it Skynet.

Indeed, as anyone who has seen Robocop, Terminator, Bladerunner or a billion other sci-fi fantasies will know, gun-wielding, self-directed robots are not to be hailed as just another rung on the ladder of technical progress. But for those who are still confused on this matter, the FLI open letter helpfully elaborates: “Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.” In other words, instead of “autonomous weapons” we might get the point across more clearly if we just call them what they are: soulless killing machines. (But then we might risk confusing them with the psychopaths at the RAND Corporation or the psychopaths on the Joint Chiefs of Staff or the psychopaths in the CIA or the psychopaths in the White House…)

In order to confront this pending apocalypse, the fearless men and women at the FLI have bravely stepped up to the plate and…written a polite letter to ask governments to think twice before developing these really effective, well-nigh unstoppable super weapons (pretty please). Well, as I say, you can’t blame them for trying, can you?

Well, yes. Actually you can. Not only is the letter a futile attempt to stop the psychopaths in charge from developing a better killing implement, it is a deliberate whitewashing of the problem.

According to FLI, the idea isn’t scary in and of itself, it isn’t scary because of the documented history of the warmongering politicians in the US and the other NATO countries, it isn’t scary because governments murdering their own citizens was the leading cause of unnatural death in the 20th century. No, it’s scary because “It will only be a matter of time until [autonomous weapons] appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.” If you thought the hysteria over Iran’s nuclear non-weapons program was off the charts, you ain’t seen nothing yet. Just wait till the neo-neocons get to claim that Assad or Putin or the enemy of the week is developing autonomous weapons!

In fact, the FLI doesn’t want to stop the deployment of AI on the battlefield at all. Quite the contrary. “There are many ways in which AI can make battlefields safer for humans” the letter says before adding that “AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so.” In fact, they’ve helpfully drafted a list of research priorities for study into the field of AI on the assumption that AI will be woven into the fabric of our society in the near future, from driverless cars and robots in the workforce to, yes, autonomous weapons.

So who are the FLI, and who signed this open letter? Oh, just Stephen Hawking, Elon Musk, Nick Bostrom and a host of Silicon Valley royalty and academic bigwigs. Naturally the letter is being plastered all over the media this week in what seems suspiciously like an advertising campaign for the machine takeover: Bill Gates, Stephen Hawking and Elon Musk have all broached the subject in the past year, and the Channel Four drama Humans, along with a whole host of other cultural programming, has come along to subtly indoctrinate us that this robot-dominated future is an inevitability. This includes extensive coverage of the topic in the MSM, including copious reports in outlets like The Guardian telling us how AI is going to merge with the “Internet of Things.” But don’t worry; it’s mostly harmless.

…or so they want us to believe. Of course what they don’t want to talk about in great detail is the nightmare vision of the technocratic agenda that these technologies (or their forerunners) are enabling and the transhumanist nightmare that this is ultimately leading us toward. That conversation is reserved for proponents of the Singularity like Ray Kurzweil, and any attempts to point out the obvious problems with this idea are pooh-poohed as “conspiracy theory.”

And so we have suspect organizations like the “Future of Life Institute” trying to steer the conversation on AI into how we can safely integrate these potentially killer robots into our future society even as the Hollywood programmers go overboard in steeping us in the idea. Meanwhile, those of us in the reality-based community get to watch this grand uncontrolled experiment with the future of our world unfold like the genetic engineering experiment and the geoengineering experiment.

What can be done about this AI / transhumanist / technocratic agenda? Can it be derailed? Contained? Stopped altogether? How? Corbett Report members are invited to log in and leave their thoughts in the comment section below.




Filed in: Articles

Comments (49)


  1. erichard says:

    I do not share the AI fear that Musk and others have. AI will always lack self-consciousness and self-will– even if AI can be programmed to act as if it has these. There is no honest reason to think Humans will not always be able to pull the plug on AI if necessary. What needs to be feared is, as you say, the psychopaths that use disinformation, deception and lies to enslave us more and more, while acting as if they were our best friends.

    • archives2001 says:

      Spot on erichard, spot on!
      However, your statement about AI ALWAYS lacking self-consciousness??
      THAT’S a bit of a stretch!… A thousand yrs from now? Ten thousand???
      I’d reconsider that statement.

    • NES says:

      I agree heavily on the first point–NO FEAR–not even with regard to the psychopaths. Be aware. Do not fear. It only adds to negative energy and feeds the evildoers who want it. Positive, happy, smile, laugh, love, hop, skip and any other adjective you want to fill in starves them because these thoughts do not key on–FEAR or the negative.

      However, I’m not so sure the psychopaths can ever unplug the next quantum generation of computers once completely proofed, nor would they want to. If any of you would like to read what’s been happening and the reason I state this (with links from our illustrious lapdog government, their ancillary agencies, their buddies across the globe and associated alphabet agencies, etc. – all open source information gathered from their own official documents online) send me an email. I transcribed the original interview with the compiler for a friend. It was almost 3 hours, so you can skip the babble if you read it. The transcript is 23 pages – not short. It’s a must-read of the psychopath/sociopath workings of the Cabal driven on an AI quantum platform that appears to be in place now. The Jade Helm 15 exercises also appear to be a boots-on-the-ground trial for the entire quantum chip driven system. Of course, it’s unlikely that the Spec Op participants have any knowledge of the real reason they are making war on American soil. Subject line: SEND me the Transcript.

      And I’ll do it.

  2. jeffmorgano says:

    In the military the phrase “comfort kills” makes the rounds as a not so gentle reminder to remain vigilant.

    The fears of uncontrollable AI spring from a misunderstanding of the actions and ethics of military and industry. Read more from John Robb… he illustrates the true use of automation and assembly lines, which was not lost on the military leaders of our era. Automation works by compartmentalizing tasks, and making tasks simple and routine. Killing operations will not be outsourced in this fashion, simply because it threatens the chain of command.

  3. archives2001 says:

    Well, James,
    I just wonder what a humongous financial crash as bad or worse than 1929
    would do. How about an EMP, bio disaster, or nuclear holocaust?

    Not wishing any of these but it might be interesting how any one of
    these or multiple events could instantly transform our world.

    Just off hand, I’d say current events and prognostications from folks like Christine Lagarde, the French Foreign Minister and John Kerry, the Pope, Mark Biltz, Jonathan Cahn, Rickards, Paul Craig Roberts, Stansberry and Bill Bonner are pointing to a major collapse in Sept or Oct.

  4. shopbruce says:

    An organized cyber attack on the AI system to reprogram “THE TARGET” might have a few of our friendly people that want to “help us” thinking with a bit more clarity! Like the old “MAD Comic Book” said: “What me worry?” The target must be re-defined.

    Instead of the masses worrying – what if the “elitists” had to worry? Think about it.

    Possibly the “conspiracy theory” can be reversed. If they believed that hack was possible – that’s a start…

  5. Alias says:


    As interesting as the question “what can be done” about the AI agenda might be, it could easily be directed at any number of other collectivist agendas as well, many of which may be equally or even more devastating. As I have mulled over the appropriate responses to their enslavement process for many years now, it has become apparent that there are limited choices.

    Number one, I believe that the only possible response with any measure of success must be a personal nullification of their aims to reduce ME to serfdom. I am not as much a pessimist as I am a realist, and I do not see much movement from humanity to save itself from the whims of psychopathic behavior AS A GROUP. I have worked dutifully for the past decade, and will continue to do so, with a goal of self-preservation by becoming as self-sufficient as humanly possible within the restrictions of limited resources – which in itself separates me from the NWO creeps who seem now to have unlimited amounts of money and influence.

    Nothing would please me more than to see aggressive action taken by an organized force of aware world citizens to combat the scum of sociopaths who believe they have some divine right of power and control over mankind. I simply do not favor waiting for that to happen, so I will take matters into my own hands, as I think it is everyone’s destiny to do anyway.

    I consider myself to be a religious person, but my beliefs do not correspond to any organized religion or dogma. I merely do not see this universe happening by some accidental occurrence. In that vein, I would hope that divine intervention into this debacle would at some point be undertaken by the real powers-that-should-be to put a stop to it.

    All that said, I will continue to follow your absolutely invaluable research into both the motives and the methods of the compassionless. It may be the best weapon we have in our arsenal – awareness. Keep up your exceptional efforts, for my sake and everyone else’s – you are a true patriot of planet Earth.

    • NES says:

      Your comments are smart. There’s no need for violence to occur. Not that I haven’t wanted to take an AK and wipe out the entire lot from time to time, myself. Believe me when I say, the Cabal is all coming down very, very soon. Crashing worldwide. When that happens you will need to avoid the mainstream media talk as it rises in timbre/tone and avoid violence of any kind while focusing on what you can positively accomplish. Help your neighbor (even if you think he’s a dick) and we can all move into a healthier world using some of the technologies we paid for all our lives but haven’t gotten to use. We will finally be able to replace the outdated ones – the combustion engine, for example. Ugh!

  6. nosoapradio says:

    I’m going to regret this comment left in a secondary state but:

    any connection with JADE HELM?

    • NES says:

      Yes, apparently. Do you want the transcript mentioned in my earlier comment? If so, send me an email.

      • nosoapradio says:

        Yes, sorry – I hadn’t read the comments carefully before posting (nor the article, for that matter). I’m generally more attentive than that on this site.
        And this morning I remembered that it was ARCHIVES2001 who’d given me the idea of Jade Helm being an AI exercise on the Greek Drama thread.
        I should have some time later today to study the article, the comments and perhaps even formulate some cogent, pertinent or otherwise worthwhile response… At least I hope so.
        Anyhow, thanks for yours.

  7. candideschmyles says:

    Still think it will be some weaponised virus getting loose that takes us back to the Iron Age.

  8. Knarf says:

    For at least the span of our lifetimes, non-deterministic general-purpose AI is and will continue to be a red herring. What we will face first is avatar robotics (human-in-the-loop) enhanced by autonomous subroutines, and a smaller subset of fully autonomous machines tasked to highly specific missions.

    For human-controlled avatars, autonomous subroutines will transparently take care of basics such as walking/running/flying across terrain. The human operator will only need to tell the machine where to go and what mode to travel in (max speed, max stealth, max energy efficiency, etc). Autonomous subroutines will handle details such as balance and obstacle avoidance.

    Until relatively recently, using a robotic avatar would have been constrained by the range of the RF control loop, and the bandwidth of same in regards to feeding back high definition imagery and other data to the operator. That issue has been neatly resolved in many if not most places by the 4G network, and of course even higher bandwidth is forthcoming.

    As for autonomy, the DARPA competitions illustrate how an autonomous robot could be tasked to enter a hazardous area and carry out very specific tasks, such as closing valves. It’s not much of a stretch to envision an expendable autonomous robot being tasked to enter a structure on an assassination or abduction mission. The robot would have a detailed map of the interior, it would “know” how to breach doors or windows as necessary, and it would use facial recognition or some other pattern-matching algorithm to discriminate targets.

    The definition of “reasonable precautions” in terms of personal security is going to change drastically. Door locks and burglar alarms will not deter an expendable and powerful machine on a mission.

    Our individual choice is to anticipate and adapt, or find ourselves at a disadvantage. There is no way back, other than immolating civilization. If by some miracle the entire US infrastructure forswore and outlawed autonomous robotics, the Chinese, for one example, would not. The technologies of the robot future have already been developed, down to even the comm network. And you thought 4G was for streaming your entertainment…

  9. NotDole says:

    At this point, I fully believe (other than the magic aspect) that the prophecy of the tabletop RPG – also a classic on the SNES and Genesis, and now a really great PC game that does justice to the “future” AD&D it was classed as – has come true. Seriously, get the second edition. (The third edition, although superior to play with, doesn’t have the same backstory about how the world is in 2052 as the Shadowrun 2nd edition main book, the one you could only use to play; the third edition made some changes to the story, but anyway.)

    One day, we’ll have to have SIN cards – not implants, unless you want to, but that’s more expensive – and the rest of us will be SINless. SIN stands for System ID Number, and it’s pretty much the wet dream of Christians who think the bar codes on goods are the mark of the beast, as these cards would contain all your info: no need for a wallet, it’s a debit/credit/driver’s license/college card/access card at work/etc. The SINless fight the corporations with weapons, countries having become mere tools of corporations by 2062. Also, unfortunately, since the magic element isn’t there, half of North America isn’t magically regrown with sequoias and the huge mountain tops that were sawed off, along with all the lakes and ecology in the western part of America and almost all of Canada – except Seattle, which is an exclave of the UCAS (US of Canada and America). I’m pretty sure if I have children – and that’s a big if; even with a decent enough salary to bring someone into this world, do I want to? – he or she will have to live as an outlaw if they put into practice everything I will teach them. Which means weapons, unfortunately, but also radio jamming to fuck with cops and, well, you see where this could be going.

    And that’s almost the positive of it. I don’t think AI’s will be ever anything like Skynet with a nuclearized space with missiles pointed at our beautiful planet, just to make it blow away.

    These people must really hate themselves to want to die so much. Must be all the snuff films they’re a part of.

  10. LiquidEyes says:

    Humans have been inventing since we first utilised fire, it’s literally in our DNA, and just as with fire, technology can be turned to good or bad purposes. Technology, in and of itself, isn’t inherently good or bad, these meanings are derived from how technology is applied.

    Weaponised Robotic AI is scary, I won’t argue with you on that, but it only becomes truly horrifying when you factor in something as malevolent as the state and the manifold ways in which it grants fewer individuals the covert ability to cause more harm and be more invasive than they could without it.

    In all honesty, we should be speaking to whether society is adequately prepared for AI or not, and the fact of the matter is that if we were we wouldn’t be having this conversation.

    AI is scary because some people are scary, not the other way around.

    • Knarf says:

      Right, LiquidEyes. Robots, with whatever extent of autonomy they are equipped, should be viewed as what they fundamentally are: a toolset. Man is a tool user.

      While it makes good science fiction, the notion of a self-aware AI being given unrestricted control of WMD is just not in the cards. Human beings are not going to hand over that kind of power to a non-deterministic algorithm in the foreseeable future – it’s not in our nature.

      An analogy is firearms: They are as dangerous or beneficial as the user, and for better or worse there is no going back to a world without them.

      Society is never fully prepared for innovation, it always has to catch up after the fact.

      • BennyB says:

        Knarf said:
        While it makes good science fiction, the notion of a self-aware AI being given unrestricted control of WMD is just not in the cards. Human beings are not going to hand over that kind of power to a non-deterministic algorithm in the foreseeable future – it’s not in our nature.

        I don’t know, Knarf. If human beings are psychotic enough to have created WMDs in the first place, with enough nuclear weapons to wipe out all traces of life on the planet probably one hundred times over, I’m less than confident that the ruling class will exhibit the capacity to make any rational decisions on whether or not it’s prudent to give R2D2 clearance to fire off the nukes should the technology arrive to do so “safely”. We’ll just end up with a droid race. What could possibly go wrong? 😉

        Who knows though. Maybe the AIs will realize the ruling class (not the average person) are a threat to mankind and do us all a favor and neutralize them before they destroy all of us, AIs included.

        • Knarf says:

          “…enough nuclear weapons to wipe out all traces of life on the planet probably one hundred times over”

          Life on this planet (and almost certainly elsewhere) has survived staggering cataclysms, far beyond what could be theoretically inflicted by detonating the entire human nuclear arsenal. We really aren’t that powerful.

          But in fact the situation with nuclear armaments is not as officially described. Big surprise, governments lie and fear-monger. Consider for a moment how all our fears concerning nuclear weapons, ALL of them, are ultimately referenced to the “official line”. Why should they tell the truth concerning this particular subject, and virtually nothing else?

        • BennyB says:

          I’m not sure it’s a matter of whether or not the authorities are ‘telling the truth’, as you put it, more than it is a matter that we’ve seen (courtesy of the United States) what the aftermath of a nuclear bombing looks like and it’s certainly not something I’d want to see repeated. I agree with you that we have no reason to believe any of the “official” accounts of what various nuclear programs consist of, what the effects of some form of catastrophic series of detonations would look like, and that we could probably survive as a species, even under the worst case scenario. Again though, that would probably be a pretty grim aftermath. (I know you’re not arguing otherwise)

          I honestly appreciate your optimism though, Knarf. I think there’s something to be said for not letting the worst case scenarios overwhelm you. Particularly when there’s a constant effort on the part of TBTsB to use various forms of apocalyptic alarmist rhetoric to advance various nefarious agendas.

      • Interesting points, Mr. Knarf. I see your point about robots, and everything else around us, as tools. I think that’s what separates man from mere animals – our ability to harness the environment to aid us in this world. That isn’t a bad thing.

        I’m personally more cautious of “robots” than other technologies, like guns. Robots would tend to make human dependency increase exponentially. If we are going to live in a digital smart city of the future with robots, AI, “pre-crime” and “test tube babies”, biometric bliss, what role is there for the individual? What do we do? What is our reason for being? If everything that we used to do, like make music, write, build, compute, make families is supplanted by boundless robotic AI technology, what is left for man? The logical consequence of this, as I see it, is the marginalization of the human form. And whoever controls the technology of the robots (assuming a lot of things here on my part) controls that narrative. Given the technocratic transhumanist psychopaths who control governments, banking and corporations, and who are the ones pushing and introducing this AI-ification of our perceived reality, I’m not as hopeful as you.

        • Knarf says:

          “If everything that we used to do, like make music, write, build, compute, make families is supplanted by boundless robotic AI technology, what is left for man?”

          We’ve had ubiquitous computing for at least 30 years now and it’s a fair question if the technology has enhanced or hindered human creativity on a net basis. For me personally it’s been a wonderful tool, but on the other hand I don’t have a “smart phone” Skinner Box to absorb every free moment and distract my driving.

          There’s nothing special about placing the term “robotic” ahead of “AI”, BTW. A computer is a computer, an algorithm is an algorithm. A robot is actually the last place to expect to find state-of-the-art AI, because of the constraints on the quantity of hardware and available power. Intense computing sucks power and generates heat, just ask anyone who participates in grid computing.

          In fact the primary constraint on robotics now is power density. We are nowhere close to building a machine which can store and convert energy as efficiently and quietly as the human body.

  11. bladtheimpailer says:

    It seems to me that the products of scientific inquiry will naturally keep progressing, as humans are wont to expand their knowledge as a part of our DNA, and that the use of new knowledge needs proper controls. Proper control should not have profit motive, technological advantage over others or other societally harmful ideologies and strategies as its basis for the control and review of new knowledge. We have no need of kill bots unless some dystopian vision of policing is in the works. At the international level we have states run by oligarchs who have their finger on the nuclear weapons button, so the need for kill bots at this level is redundant, but they will be pursued no doubt as part of the MIC’s next generation of must-have war violence products. Kill bots are being developed by the same command and control centres we all seek to at least remodel… so kids, don’t forget to smash the state.

  12. BennyB says:

    I’m honestly pretty surprised that the consensus here so far seems to lean towards the idea that AI in general, but in warfare specifically, is not something which should be looked at as an imminent threat. Personally, I don’t find a Terminator/Skynet type scenario particularly far-fetched, at least in some shape or form. Ultimately, the practical nature of how AI interfaces with society will largely have to do with the context in which it’s designed to function. Given the fact that most of the high-level experimentation in the field is geared towards military use, it’s hard for me to imagine how AI with this disposition could function without becoming a serious risk.

    In theory, strong AI would be able to advance beyond the human capacity for intelligence, as a piece of technology potentially could continue to advance far beyond the biological limitations that dictate human intelligence. Yes, this is a long way off, but I have little reason to believe that the mentality of the elite and the military industrial complex will evolve, even in the face of serious concerns that surface as the technology advances. (“Well Bob, RoboHog decimated the entire population of Afghanistan, including all of our personnel, but otherwise the experiment was wildly successful…”)

    Whether or not some form of altruism could be hardwired into a sentient robot is a stand alone argument with its own ethical implications. The recent movie “Ex Machina”, which I thought was brilliant, touches on this. One of the questions posed in the film is whether or not the AI robot being tested is actually genuinely demonstrating empathy and self-awareness or just pretending to do so in order to manipulate those administering the test. Cleverly, the film doesn’t give us a conclusive answer, however it’s a perfect example of how the psychopathic elite carry out their agendas by manipulating and fooling enough of the general public into accepting the “official” narratives and lies which mask truths that would defy even the most basic levels of humanity or morality.

    I think artificial intelligence with some sort of boundaries when it comes to defining enemies is inherently problematic as the superficial, irrational, and contradictory nature of warfare makes the drawing of such boundaries a concept which defies logic in the absence of the emotional spin and propaganda which is used to justify and perpetuate it. If a robot/machine is designed to make autonomous decisions about how it goes about defining self preservation or elimination of “threats” it’s not that difficult for me to imagine how some of the nightmarish scenarios found in science fiction films could come to be reality.

    On a side note: there’s a great short story by Philip K. Dick, “The Defenders”, about a post-nuclear-war scenario being managed by robots above ground, where humans can no longer safely operate. The robots are essentially serving as the post-human representatives for the main superpowers, the United States and the Soviet Union. It’s quite clever and is clearly taking the piss out of the absurdity of the mentality of war, particularly nuclear warfare. It’s nothing like Terminator, Robocop, or Bladerunner. However, Blade Runner is actually an adaptation of Philip K. Dick’s “Do Androids Dream of Electric Sheep?”. Both the film and the book are favorites of mine.

    “The Defenders” is within the public domain and is widely available as an ebook. I highly recommend checking it out.

    • BennyB says:

      On an unrelated note:
      James, I think your comment about being “back at the wheel” (or “back behind the wheel”) wasn’t necessarily the best choice of words to wrap your recent podcast regarding digital automobile hacking “conspiracy theories”. 😉

    • nosoapradio says:


      Ex Machina was indeed an interesting film: clearly the android AVA’s capacity for empathy was weaker than “her” desire to be free to live a humanlike existence, and “her” outrageously disillusioned, alcoholic, insightful, manipulative and brilliant maker knew this. Yet, “her” brand of empathy was strong enough for “her” to understand how “her” would-be liberator functioned and what he needed to experience to become her ally.

      AVA also clearly had little or no empathy for the other sexy, dancing, Geisha playtoy droid, though she coveted “her” “skin” and “her” superior human appearance.

      What was provocative in this film was at least two-fold for me: one: the humans’ typically human capacity for empathy for a particularly attractive and endearing android or for a fellow human being is their undoing, (even the playtoy seemed to empathize with “her” fellow droid leading to her destruction)

      and two: the film insisted visually and dynamically on the theme of reflections and I could not help but see a message about man’s capacity to recreate the only thing “he” knows: human nature. Humans whose burning and ruthless desire for freedom to “be” is stronger than their empathy for the other. Each of the 4 characters betrays another character in his or “her” pursuit of a profound desire: for freedom, for vengeance, for knowledge and or companionship and empathy and, of course, love. Humans cannot recreate what they do not profoundly know. They cannot create consciousness that would willingly self-destruct in another’s interest.

      Humans, reflected in androids, reflected in humans, in windows, and cameras, water and snow and inspiring untamed nature and under the “skin”.

      p.s. “under the skin”, I realize, is also a strange sci fi film about empathy and the fatal “humanization” of extraterrestrial life packed into the irresistable curves of Scarlett Johansson…

      • BennyB says:

        I’d say one of the general takeaways from the film was that it would be extremely difficult, if not impossible, to test the consciousness and capacity for empathy of a robot with strong AI in a way which is ethical. Nathan, the creator, is a narcissist, willing to take advantage of those involved in the experiment to find out whether his creation demonstrates “true” AI. But in the end, is he right that a creation which cannot demonstrate genuine empathy towards humans is a potential threat to humanity? How much of the risk associated with the experiment is down to the way he’s going about doing it? As you mentioned, are the potential faults of his creation merely a reflection of his own?

        Ultimately, as I indicated in my previous comment, those with the necessary resources to actually fund the development of a sentient robot probably wouldn’t be the kind of role model you’d want for anything that would genuinely contribute more to society than what they extract or degrade. Bill Gates wonders aloud why ‘some people are not concerned’ about research in the field of AI. Given the fact that it would most likely be someone like him creating this type of entity, this is one point which I might actually agree with him on. A robot with the capacity to “innovate” beyond what Gates is able to do is not the kind of experiment I’d like to see come to fruition.

        Regarding “Under The Skin”, I’m not sure whether having a better idea of what was actually going on ahead of time (to the extent that this would be possible) would’ve made this film any less disturbing. I saw a funny comment on IMDB when I was trying to figure out what the hell I just watched, where a user joked about the reaction of men who watched the film just to see Scarlett Johansson nude. It’s sort of an appropriate analogy for what plays out on screen.

        • nosoapradio says:

          It’s indeed a shocking moment in the film when Nathan admits that he created something that he fundamentally despises and is afraid of but that he went through with the experiment because eventually someone would have done it anyway and he trusted himself more than say… someone like Bill Gates.

          This pre-existing loathing and fear in the creator would seem to have doomed the experiment to its final bloody ending…

          Perhaps it is less the testing of the consciousness of a created intelligent being that is unethical than the very act of creating a potentially conscious being within the context of a scientific experiment that must be judged successful or unsuccessful – creating something that is doomed to suffer within the context of a controlled experiment; in other words, the very act of attempting to create consciousness is itself unethical. Unless you’re sure to succeed and allow the “creature” to live a fulfilling existence… if you see what I mean…?

          On a slightly different note; I wonder to what extent, should intelligence be something measurable, the degree of a person’s intelligence, “his” capacity and need to distance himself from a phenomenon in order to understand it, negatively impacts “his” ultimate capacity for empathy…? I guess this comes back to what I already said about the desire for knowledge and expanded consciousness being stronger than anything else such as empathy or wisdom…?

        • nosoapradio says:

          re: the onscreen/offscreen analogy of men’s reaction to Under the Skin, ironic and somehow touching…

          Those men, irresistibly attracted to Scarlett Johansson’s body, suffer the same fate as the extraterrestrial being itself…
          “She” ultimately is stripped of her own borrowed skin, betrayed by her own irresistible attraction to her own image along with a profound curiosity and desire to know the human other…

          life always outdoes fiction but can they be faulted for it…?

        • BennyB says:

          “the very act of attempting to create consciousness is itself unethical. Unless you’re sure to succeed and allow the “creature” to live a fulfilling existence… if you see what I mean…?”


        • Lively discussion folks. It got me thinking. Obviously, those of us not in the game of control and of telling other people what to do (politics) take empathy into account in our daily actions. Those that tend to lack it are the politicians, bankers, corporate bigwigs and military ‘leaders’ that also tend to have psychopathic tendencies. Their ability to empathize is missing, which is why they coin phrases like “collateral damage.” Consequently, there is this dehumanized, robotic coldness to these ‘leaders’ and ‘movers and shakers’ that allows them to ‘rise to the top.’

          Thus, if they are behind pushing this technology, it would only be expected that these so-called robots will be reflections of these cold individuals, kind of like Ava, whose “creator” Nathan, head of Bluebook (which obviously conjures up images of Facebook, Google, and even Project Blue Book), was akin to a Bill Gates/Mark Zuckerberg type.

          Lastly, I don’t think there is even the possibility of an algorithm that can “program” empathy into a machine, as this is human, all too human, starkly different from the binaries that comprise computing. I suppose this is my personal bias, as I believe in the idea of a soul, which makes humans more than just the blobs of protoplasm and machines that some of our transhumanist technocrats would rather we be.

        • BennyB says:

          This is a great comment, AoC. It’s in line with what I was thinking when I was watching the film. Ava, in essence, functions precisely the way the psychopathic political elite class does. They’re “human” enough not to give themselves away, even to convince us that they actually care about us and have feelings themselves, but in reality their own self-interests are what carry the day. It’s easy to make decisions that further your agenda when you don’t have to be weighed down by ethical standards or consequences. If your only goal as a corporation is to increase profits, you go where the cost of labor is cheapest, regardless of whether you destroy the livelihoods of thousands of people in one place and put the lives of another group at serious risk, increasing profit margins at the expense of the most basic safety standards. If your goal is to secure the natural resources of a region, the plight of those whose lives you throw into chaos to implement control over the resources and infrastructure is inconsequential.

          It’s an unpleasant yet logical progression. People gravitate towards this type of leadership in times of war, for example, because what’s needed is somebody who can make the best strategic decisions without being bogged down by the moral implications on a micro level. In times of peace, however, the same type of behavior promotes the same heartless and rash decision making when logical necessity no longer dictates these measures.

          As I indicated previously, it’s not necessarily that I think the idea of artificial intelligence on some scale is, in and of itself, an inherent risk; but based on the types of tasks such entities would likely be programmed to perform, and on whose behalf they’re performing them (those writing the checks to fund the research), the end result, in my opinion, would almost undoubtedly be something hazardous. The only real restrictions on the super elite when it comes to achieving their objectives are the intellectual mechanics of outsmarting society as a whole and their interest in self-preservation: i.e. not finding oneself dangling on the pitchfork of an angry mob.

          If artificial intelligence were to reach the threshold of singularity, then, based on the aforementioned, to quote Shane Legg, co-founder of Google DeepMind: “If a super-intelligent machine decided to get rid of us, I think it would do so pretty efficiently.” Essentially, the super elite are only interested in preserving us to perform necessary tasks as workers, or as entertainment: artists, musicians, gladiator-like sports figures. I think transhumanists like Ray Kurzweil envision the singularity as a sort of fusion with technology, but within that, I think, lies some of the hubris of these sorts of figures, like Bill Gates for example.

          In another set of sci-fi movies, the “Alien” series (Alien, Aliens, and Prometheus being among my favorites), you have two main villains: the Xenomorphs (Aliens) and the Weyland/Weyland-Yutani corporation, who (not including the Prometheus plot) are interested in obtaining the Xenomorph to use for their bio-weapons divisions. The corporation, in very realistic fashion, puts the lives of those who come directly into contact with this threatening species in jeopardy in order to utilize this hostile creature for their own sinister purposes. However, it’s proved time and time again that efforts to control such a dangerous species, whose only apparent goal is to survive and reproduce at a basic, primordial level, ultimately fail, and it’s the hubris of those who believe they are smart enough to control such a dangerous entity to serve their own goals which ultimately leads to their own demise.

          Unlike the concept behind Alien(s), I don’t think artificial intelligence (or various attempts at an autonomous robot) is by default inherently bad or dangerous per se. However, so long as it reflects the interests of the super elite, I think there’s good reason to be very wary of efforts to advance this sort of technology. I think many of the problems we face as a society could be positively addressed through the harnessing of technology. I’m not a rejectionist of technology by any means. But when it comes to those presenting advances in technology, from gadgets, to games, to social media, to useful programs that simplify mundane tasks, we have to remain conscious that the underlying motives of those pushing the technology (not necessarily those developing it) are, at best, only superficially altruistic. To master the technology without being mastered by the self-appointed “masters” of technology, you could say, is something worth striving for.

    • Knarf says:

      “Personally, I don’t find it that hard to imagine a Terminator/Skynet type scenario particularly far fetched, at least in some shape or form.”

      With respect, if your perception was informed by experience in software specification and coding, and a career working with computing platform hardware, rather than works of fiction, the Skynet scenario would not be an immediate concern.

      If within our lifetimes we are led to believe some algorithm has become “aware” and gone “rogue”, it will be false flag cover for typical human skullduggery.

      • BennyB says:

        I think I’ve failed to articulate my real point here, so I’ll attempt to do so.

        My concern about AI doesn’t have to do with robots arriving at some sort of ‘consciousness’ and turning against “us”. We already have a class of cold psychopathic elite which carries out this function. Advances in technology simply allow them to carry out their agendas more efficiently and effectively. When I mentioned a ‘Terminator/Skynet scenario’, I wasn’t talking about an algorithm where robots become “aware” or go “rogue”; I was talking about a scenario where a piece of technology (military/weapons in particular) does what it’s “supposed to do”, but perhaps in ways which even those who designed it hadn’t intended.

        As someone ‘informed by experience in software specification and coding, and a career working with computing platform hardware’, I’m sure you’re well aware that you can make a small change to one thing which leads to a cascade of errors and becomes a debugging nightmare, particularly if it’s not spotted immediately, temporarily rendering whatever you’re working on partially or completely non-functional. This could mean having clients screaming at you, massive headaches, and sleepless nights, but in most cases nobody’s actually going to get killed as a result (although there may be threats ;-). Our collective military firepower is dangerous enough as it is. I find the concept of designing aspects of it to perform tasks normally carried out by humans, autonomously, pretty frightening.

        Hopefully this clears things up some.

      • BennyB says:

        Also, just for additional clarity: when I’m talking about “robots” in military or policing situations, while the bar for conduct is often remarkably low, there are in some cases limits to what people are willing to do, or at least to how long you’re able to control them with various forms of threats or propaganda to continue doing so. And beyond that, if it’s at the cost of life and limb, eventually something’s likely to give. A robot or other piece of military technology, again like the super elite, does not have a conscience and therefore will carry out the task it’s been programmed to do without reservations and without fear of death.

        The other comment is “awaiting moderation”, so if this doesn’t make sense as an “also” that’s probably why…

  13. nosoapradio says:

    Maybe the very dynamic that constitutes consciousness is the same one that ultimately destroys it (hopefully after creating another entity capable of consciousness, one able to survive the destruction of its creator): that dynamic being the unquenchable thirst to expand itself, to exponentially learn more, which always exceeds the thirst for a wisdom that would have, had it been given the time, saved that consciousness from its own unwitting self-destruction.

    that fine line between simulation and authentic phenomena…
    Are elaborately arranged, electrically powered, man-made microchips inferior to organic flesh, blood, bones and grey matter as elements for perception, memory, reasoning and consciousness?

    and might consciousness not be something that innately seeks to recreate and multiply itself, like a virus; this frenetic and uncontrolled need to recreate itself, preferably in some more resistant form, leading to its own destruction…?

    Perhaps consciousness’ desire to exist exceeds its need to preserve its weak, unwise and obsolete host; in this case; humanity…?

    I’m starting to sound like a psychopath…aren’t I…?

    Well, perhaps some existing superior consciousness will take pity on conscious humanity, which perhaps like all consciousness is more desirous of knowledge than of wisdom, and will protect it from its own folly while harvesting the fruit of humanity’s creation of a new form of resistant consciousness?

    Have you seen Her or Automata? Should I perhaps knock off the Côtes du Rhône?

    • nosoapradio says:

      Come to think of it…

      I think I’ve taken a gigantic illogical leap from AI to EC…

      EC being Engineered Consciousness.

      At any rate, my need to post certainly seems to exceed my need to make pertinent sense of the subject at hand…

      an innate and unintended defect in my case…

  14. Colin Green says:

    To a degree, I liken it to the gun debate. Guns don’t kill; the people aiming them are the problem. I can see how it would be the same with AI and the programmers/owners.

    As for true AI self-awareness, I suspect it will never happen. We don’t fully know or understand our own minds (we’re certainly useless at fixing them). I find it difficult to imagine us capable of creating a development matrix for AI self-awareness when we don’t understand what it really means.

  15. Although I do not subscribe to the idea that robots will ever have “self-consciousness” in the sense that we are aware of our awareness (does the concept of a soul apply to robots?), I do believe that, given the global AI grid via the “Internet of things,” you can have giant supercomputers which can crunch so many variables at once that they would constitute a like-class of “consciousness” simply by being able to process the loads of information available.

    Jay Dyer did a great analysis on this theme in relation to the movie “Ex Machina.” It’s a very good read and I urge folks to check it out.

  16. BennyB says:

    Thanks for sharing, AoC. nosoapradio and I discussed this film earlier here.

  17. idsstudio says:

    While I haven’t read through every single comment, none of the comments I’ve read here seem to address the gargantuan elephant in the room…

    I don’t think that the agenda here is to create AI and force it on humanity, or to address the fear of AI becoming self-aware. I think it is to sell us ‘the idea of AI’, which has been shown to fascinate people. Once we buy into it, it can be integrated into our lives to a point where we are dependent on it. It can be woven into the military and other spheres, and then of course laws can be used to solidify the integration. Then the psychopathic social engineers can slowly take control of these ‘robots’ to use as a direct control mechanism, thereby removing from the chain of command the pesky human element that has empathy. People need to understand that the so-called elite are control freaks that have zero empathy. The only reason why it is taking so long for them to completely dominate us is because they have to delegate to and manipulate humans. Remove that and you have very sick in-humans with joysticks controlling very sophisticated machines which are now integrated into our everyday lives.

    So yes, I do think it is a very big problem that definitely must be addressed!

    • idsstudio says:

      And furthermore, I would suggest that one really effective solution may be to force all technology to be open source, so that there is never secret central control and there are always mechanisms to disable certain things.

      Just a thought.

  18. BennyB says:

    Right, AoC. As I attempted to highlight in my previous comment, my concern is not that robots would go “rogue” or turn against the population; it would just be a matter of them becoming functional to a level where the expendable class becomes more and more obsolete to the elite. If a robot can do the same task as well as or better than their human counterpart, it’s not particularly difficult to imagine which option the elite are going to choose. While I believe I’ve also stated that the bar for conduct on the part of military and law enforcement working on behalf of the system is typically depressingly low, there are limitations to what these forces are willing to do (even if they’re minimal). With robots or autonomous weapons systems the elite don’t even have to worry about placating those who enforce their policies.

    All of this said, while there’s much that disappoints me about societies as a whole, I do believe in the greater good of the human spirit. As you mention, open source software has truly been revolutionary in allowing people to cooperate, share ideas, and break new ground outside of the traditional professional realms, where in the past a specific job or company position was necessary to have access to certain technologies, and what you were allowed to do with those tools was, at best, subject to approval.

    It’s important to be wary and realistic about the ways technology is being used against us, but it’s just as important, if not more so to remember that there’s strength in numbers and in the human spirit. There’s no reason to believe we can’t harness the power of technology to outsmart the rulers and come up with solutions to problems working, networking, and cooperating outside the boundaries of the traditional top down structures of power.

  19. bharani says:

    Hi all,

    I think we are taking a very materialist approach to consciousness. It may be interesting to think about what mind is and is not.

    In Acquiring Genomes: A Theory of the Origins of Species, Lynn Margulis and her son Dorion Sagan make the convincing case that microbes/bacteria are selves. While mind may be debatable at the level of bacteria, to me it is clear some form of intelligence is working with matter/form. Here is the quote.

    “Autopoiesis,” literally “self-making,” refers to the self-maintaining chemistry of living cells. No material object less complex than a cell can sustain itself and its own boundaries with an identity that distinguishes it from the rest of nature. Live autopoietic entities actively maintain their form and often change their form (they “develop”), but always through the flow of material and energy. For any organism, any autopoietic entity, a specific sustaining source of energy (such as visible light, methane oxidation, or sulfide oxidation) can be identified, as well as a source of carbon (such as sugar, protein, carbon dioxide), nitrogen, and other required chemical elements. [Acquiring Genomes: A Theory of the Origins of Species, p. 96]

    “The Embodied Mind: Cognitive Science and Human Experience” discusses just what is involved in the project of cognition. The 1st edition is over 30 years old, and AI didn’t really take off until it finally admitted it needed to learn from life, especially babies of all species. This book made the case for that.

    Genetic engineering is what is needed to augment robotic systems. Why? Robots are just too expensive and they cannot replicate (not for the foreseeable future). So you need cheap, not-too-smart, easily replaceable, adaptable beings with the ability to evolve in real time. At the present time only life has that ability.

    Something to think about
