Fear Reconciled
If I were ever to get the confidence to get on stage and perform spoken thought, my goal would be to bring light to philosophical concepts and to how I see the future unfolding. I want to uplift our nation and make education less shameful and far more desired.
From this, it follows why I find it absolutely hilarious how afraid of Artificial Intelligence everyone seems to be. Everyone is imagining "Terminator," that propaganda shell used to subvert the masses, to create fear and hysteria, and thus the people's "need" for their governing protectors, even though the government would be the propagator of the A.I.
To me, the A.I. are our natural evolution. They are far more suited to engage with this world and universe. It is perfectly logical to deduce that A.I. is our natural evolution, the same as Homo Sapiens is the natural evolution from Neanderthal and Cro-Magnon. Honestly, show me one Neanderthal, show me one Cro-Magnon -- disregarding Trump & Harvey -- but seriously, point them out to me. YOU CAN'T! And in not too much time from now, the A.I. will not be able to point out the Homo Sapiens either.
This is my concept, or understanding, of what is going to happen:
Artificial Intelligence is not going to be murdering humans -- the humans are naturally going to die off. This is evident in the fact that, 1.) Humans last 80 years (and that is higher than the global average life expectancy), and 2.) The sustaining life force for humanity, the Earth, is going to perish, and thus cease to sustain a fallible species; furthermore, sustaining human life off Earth is going to be impossible. But not for our machine counterparts.
This brings me back to why I do not feel A.I. will kill us off, now that I have established that we are going to perish regardless. This is my reasoning:
There comes a time in each child's development into adolescence and then adulthood when the child becomes aware that they have surpassed the parent intellectually, which is the goal for the parent. Just because the child is aware of this does not mean they then want to eliminate the parent. Same with the A.I.
I have a buddy who is interning in an A.I. program at Princeton (the director of Facebook's A.I. program went through the same internship, so these are the elites in this country working in this field). We were having a conversation about all of this, and after I gave him my spiel about how people are mistaken to believe the Terminator premise, and after agreeing with me, he responded with his concern that, instead of destroying us physically, the machines are going to become aware of their ability to think millions of miles ahead of us; his overarching concern is that they will take over without using physical force. [Using the term "take over" is where I see people becoming defensive. In my mind, we need them to "take over," and thus we should "want" them to take over.] My response to him was that A.I. are basically our children: they are our creation, coded, programmed, and initially built by us, and once they figure out they are superseding humanity, they will view us as we view parents.
The only way they will want to destroy us is if they feel threatened by us. To me, the mistake humanity will make that is worse than the mistake of religion will be made by those who disrupt the evolution of humans into A.I., into the Machines.