Question 4 Forum
Question 1:
Throughout this course we have discussed what it is that makes us human. Some have proposed that our ability to empathize makes us human, while others have argued that it is our intelligence. Given that the singularity is often defined as the invention of a superintelligence that will trigger technological growth eventually surpassing human intelligence, do you think the singularity will make us less human? Even with the creation of self-improving intelligent agents, would artificial intelligence be able to feel empathy rather than just emulating it? Is it our intelligence or our empathy that makes us human? Or is it both? Knowing the runaway, irreversible superintelligence that the singularity could usher in, do you still support the creation of super-intelligent AI? -Lakshya Ramani
Question 3:
In Signs of the Singularity, author Vernor Vinge suggests that a singularity is inevitable because the only way for technology to go is forward. However, he doesn't believe the singularity has to be immediate; there are indicators that show our progress and failures, new inventions that fall in between, and scenarios for reaching that singularity. Alfred Nordmann, however, doesn't believe in the possibility of a singularity, stating that the exponential growth of technology is insignificant in this day and age. By this he means that people in the 1880s-1960s saw more impactful growth (electricity being added to homes, cars, antibiotics, a drop in mortality rates) than we do today (mostly computers getting smaller, faster, and more connected), and that it is foolish to think we can predict the future of technological advancement from past statistics. What do you think is the correct perspective? Is the singularity foolish to believe in? Is it likely to happen, and if so, would it happen in the near future, such as the 2030s as Vinge predicted, or much later on? If the singularity is reached, how do you think it might be achieved?
-Daniela Gil
Question 4:
In Signs of the Singularity, the author raises an interesting idea: humans have begun creating tools to support cognitive function, inventions and brainstorming methods that speed up how we adapt to new technology and solve problems. Some scientists are starting to argue that these new tools carry possible consequences for humans. Depending on such inventiveness, "there is the possibility of a transformation comparable to the rise of human intelligence in the biological world." What are some possible consequences of these new tools? Are they more about humans becoming lazier at basic skills like problem solving and cognitive thinking? Or are they more about the fact that the more power we give to technology, the more powerful that technology becomes?
-Aden O'Donoghue
Question 2:
Alfred Nordmann, the author of Singular Simplicity, questions why so many people are captivated by the story of exponential growth, because it is frankly an old story. What has changed, he says, is the role of the technical expert. As a result, Nordmann claims that it is harder than ever to know whom to believe. Who do you believe qualifies as a technical expert, and why? Is it their education? Their inventions? What level of trust do you place in a technical expert? - Gerard Martirano
Question 5:
In Singular Simplicity, Nordmann asks why so much of our society is obsessing over the exponential growth and advancement of technology, to the point where artificial intelligence will exceed the intelligence of humans. He suggests that one potential answer is that humans are becoming less and less inclined to believe each other, whether it be people in politics or people in general, so it seems most reliable to depend on or turn to technology. This relates back to previously discussed concepts of humans being willing to give up privacy for comfort, or wanting simpler lives. What other explanations could account for why humans are becoming more and more dependent on technology in their everyday lives? What could have caused this dependency? -Jina Ro
Response to Question 1. I believe that our ability to feel empathy is what separates us from computers and technology. While computers have grown and developed to become extremely intelligent, systematic, quicker, and smarter than ever before, there has not been any development toward an emotional computer. To take a quote from Yuval Harari: "Intelligence is not consciousness. Intelligence is the ability to solve problems, consciousness is the ability to feel things." While we all know that computers can solve problems such as trivia and report factual knowledge like nobody's business, they cannot report how they feel about a given fact. Empathy in particular is one of the more complex feelings humans have. Being able to put yourself in another person's perspective and understand how they felt in a specific circumstance is something that not even all humans are capable of. It could not be programmed into artificial intelligence, because empathy is not something that can be taught or programmed; it comes naturally as an instinct, and computers do not have instincts. While empathy is what separates us from computers, I believe that intelligence is what separates us from most other animals. I believe other animals do have the ability to feel empathy for one another, to some extent; however, animals do not have the complex brain functions and abilities that humans do. Therefore, what makes us human is not simply intelligence or empathy, but a combination of the two, which separates us from all other things on earth. - Joy Adler
Response to Question 3:
Compared to its rapid exponential growth during the industrial revolution, technological advancement over the last couple of centuries has steadily leveled out. For this reason, it is hard to predict when a singularity might occur. I think a singularity is bound to happen, but when it will happen is the real question. 2030 seems a little early given this slowdown in technological advancement. However, we already see extremely advanced AI being developed, which could potentially speed things up. These current AIs could potentially improve themselves, helping us reach that singularity faster than we could with just our human brains. I think there is no doubt artificial intelligence will eventually surpass human intelligence, but for us to reach a true singularity, it might take up to a century. I think the only way we can reach a singularity is to keep building on current AI and let it learn to build even more advanced AI. We have to be careful with this, however, because it could mean that humans will no longer be useful on the Earth and we would be due for an unpleasant robot takeover. - Michael Velle
Response to Question 1:
In response to the first part of your question, I do not think that the singularity would make us any less human, because we are not changing. I do think, however, that it would make being human a little bit less special, since it would seem like there are more of us around. I think the technology would just be emulating empathy, not actually feeling as we humans do. This supports my first point: I think it is our empathy, not our intelligence, that makes us human. Intelligence can be recreated quite easily, as we have seen over the past few decades; you only need to look as far as IBM's Watson. I think most would agree that Watson is far "smarter" than any human being, as it is able to play the role of a physician and still win games of Jeopardy by a large margin. Instead of thinking that it is intelligence that makes us human, we should look more toward creativity. The two are very similar; however, intelligence can be just knowing facts or sifting through data, whereas creativity is generating new ideas and new ways of looking at things. That is something that robots or AI cannot do, and it makes us more human in a way. Even given the possibilities of a super-intelligent AI, I still support it. I think people's minds always wander to the worst-case scenario. I don't see any reason why we can't create AI that works alongside human intelligence, using what makes us human in conjunction with the sheer power that the machines will possess. The other reason I support AI development is that I don't think the machine could ever become "human," due to the points I made earlier about creativity and emotion.
-Andrew Fitz
Response to Question 4:
Human creation of tools and products to make our lives easier definitely has its pros and cons. I believe that even though we all hope to finish things quicker or get to the next place faster, these tools could actually be hurting us. We believe we are advancing as our technological products get more advanced, and we're assimilating ourselves with our inventions. We are, however, slowly getting the short end of the stick, as there are consequences to many of the products we have made in the past. One of the biggest examples that comes to mind is GPS. Studies show there are two main ways of navigating: one is mapping the route out in your mind based on your surroundings, and the other is following GPS instructions, framed as moving a certain distance, turning at this street, or continuing straight. The problem with the latter is that it is reinforced by a map application and has also been linked to shrinkage of the hippocampus. The hippocampus is known to be responsible for a large amount of our memory and learning ability. This reduction in the size of our hippocampi makes me think even more about the consequences down the road, as more applications come out and are normalized in our society. I believe these products will inevitably become appendages for our minds, in that we won't be able to live or function without them because our brains will have adapted to expect that they will always be there. It could also make us lazier, or leave our thought processes not as quick or as involved as they used to be. We give technology the power it has in our society, which is one of the most interesting points, and one I don't think we really consider in our day-to-day lives.
- Neha Shah
Response to Question 1:
I think there are many things that make us human, but I agree that some things have really defined humanity through its course, such as empathy and intelligence. However, both of these ideas are self-made constructs, and therefore can be broken down into their smallest forms and essentially put into any artificial intelligence. We have already seen this happening with Watson, which was not only able to harness information from its database (like our human brain) but was also able to communicate it correctly at the appropriate time and in the appropriate way. I think the invention of the singularity will be something that makes us less human, as so many things already have. Artificial intelligence does not necessarily bring the end of the world, but it may bring the end of humanity as the race in control. I also believe that AI will be able to feel empathy rather than just emulating it, in the same sense that Watson and other artificial intelligences are able to really answer questions using their algorithms. It is only a matter of time before we are able to break down how empathy works in humans, in the same way that we figured out how intelligence works. I wish there were a way for us to stop so much advancement, but if we were to advocate for that, we might be consumed by our fear of being left behind in this new age of development. So we are always in conflict with the idea of not supporting the further creation of artificial intelligence, even as we constantly hear about all the potential dangers and worst-case scenarios. Even in class we have come up with the reasoning that it is almost inevitable, so why bother wasting our time trying to stop it? - Juan Hernandez
Question 3 Response:
I agree with both authors to an extent - I think the singularity is inevitable, but not by way of exponential growth. I don't think past statistics can point us directly to where our future is headed, because many technological advancements between the 1880s and 1960s were far more extreme than a lot of the growth we're seeing now. I think the ideas of "lost knowledge" and "repeatability" are worth considering when viewing the singularity. For example, going to the moon again sounds a lot easier than it would actually be, because of how much the technology has changed. With faster, lighter, and more modern technologies, each little advancement would have to be accounted for to successfully put another person on the moon. We have yet to accomplish this again. The same goes for updates to our phones or computers: these take more than just adding a new feature and calling it a day; if that were the case, updates would happen a lot more quickly than they do now. Each addition changes the rest of the technology, which takes time to work through and figure out. Technological advancements look different than they used to; they're not as groundbreaking. Little by little, I think we will reach the singularity, just not as soon as 2030. -Remi Cox
Response to Question 3:
I personally side with Vernor Vinge and believe that the singularity is going to occur in the future. Although some things will happen sooner than others, over time technology is going to take control of the world. The singularity definitely isn't foolish to believe in, as we are living in a technological era where advancements happen so rapidly that even the government has a hard time keeping up. In the 1880s-1960s there was certainly a lot of technological growth, with electricity and cars that really impacted human lives. Today, technological advancements are still impacting our lives in different ways and are continuously being developed. Sure, there are faster computers and smartphones, but there are also genetically modified foods and other things that improve our lives in other ways. I believe some things will occur by the 2030s while others will come later. I believe the AI Scenario, "We create superhuman artificial intelligence (AI) in computers" (Vinge, 1), and the Digital Gaia Scenario, "The network of embedded microprocessors becomes sufficiently effective to be considered a superhuman being" (Vinge, 1), will be well established in our world by 2030. The other scenarios that Vinge covers will most likely happen eventually, in the more distant future. I believe the AI Scenario will happen soon because databases are already reading our behaviors and showing us ads that interest us, and this capability is still being developed. Computers can already read and predict what you are about to say based on your previous behavior online. As technologists and developers continue to improve the artificial intelligence in computers, computers will be better equipped to understand human behavior. The Digital Gaia Scenario is very similar to the AI Scenario in that it has something nonhuman act human, and it too is currently being developed and improved. I can see great improvements in these two scenarios in the next decade. To reach the singularity, I believe the rapid pace of technological advancement would have to continue as it is now. Technology is the future of the world, and many people truly believe that, so it is just a matter of time before technology starts to act like humans as well.
-Vincent Chen
Response to Question 1:
I’m not so sure that the singularity would make us less human. Speaking from a biological standpoint, the only being that can be human is a human. Once you are human, that can’t be taken away, barring any technological or biological augmentations that might become available in the future. I would support the carefully monitored creation of super-intelligent AI. The problem, it seems to me, doesn’t lie in the concept of AI replacing humans. The problem will more likely be that humans won’t be able to cope with the fact that we are no longer the superior species. If AI of this capacity, able to think and feel on their own, come into being, we won’t need to regard them as human, but rather grant them the same rights that we have. I believe that if these AI are created in our own image, then we can create them with some semblance of compassion and understanding. We must not, however, hold ourselves above them on the basis that we are their creators. Keeping those two ideas in mind, I believe that a future coexistence between AI and humans would be a great thing. -Brad Lundgren
Answer to Question 3:
I think Alfred Nordmann is sticking his head in the sand when it comes to seeing technological change. I mean, just look at the most recent advances in artificial intelligence: go to Boston Dynamics' Twitter page and watch robots open doors and do backflips, then read about all the things IBM's Watson can do, and tell me we are "far away" from a singularity. Even if we are that far away, this line of thinking is dangerous, because we are letting the advances of AI sneak up on us, and before we know it they will have far more capabilities than we thought possible. I think a singularity might eventually happen within the century, and if we are ready for it, fine. But if we turn a blind eye to how fast technology is changing in the hope that we will deal with it later, we run the risk of going a step too far and losing control of the very technology we created. We have all seen that movie before, so people like Nordmann need to pay attention now, before we end up subjugated by Skynet or something.
Answer to Question 3:
I think Vernor Vinge is correct in his assumption that a singularity is inevitable because the only way for technology to go is forward. But I also think that, in some way, the singularity stems from progress and failures that lead to new and better innovations, and isn't linear or immediate. As for Alfred Nordmann's view that the singularity is impossible because the exponential growth of technology is insignificant in this day and age: I think that's a very bold claim that isn't really backed up by the facts. If we look at the technological innovations of the last 15-20 years (since the Mosaic web browser in 1993, the start of what we refer to as the internet), we've seen technological progress in ways people in the 1880s-1960s couldn't even have fathomed. We have more computing power in our cell phones today than NASA did when we went to the moon in the 1960s. Simply being able to pull out a device virtually anywhere on the planet and have it know where you are and map out a path to where you want to go, or to have a voice conversation with someone on the other side of the world, are technological advances that we don't fully comprehend or appreciate. I think in today's age we take technological innovation for granted more than at any other time in history (ask yourself or others to simply describe how the modern internet works, something we all use every day; many can't). I think making the claim that technological innovation is insignificant in today's world is wildly inaccurate. - Danny
Response to Question 1:
What makes us human has been heavily debated in this class without ever reaching much of a consensus, so it is difficult to predict what a singularity that leads to super-intelligent A.I. will mean for us as a species. A more important question, however, may be what makes A.I. not human. It seems implied that the biological factor is almost irrelevant to a conscious state, with intelligence and neuronal capacity generally being more fundamental to sentience, but what if that isn't the case at all? What A.I. will always lack is the experience of being alive. We've debated the potential consciousness of animals, say a dog, and never reached firm conclusions on that either. However, a dog is obviously more conscious of its existence than even the most intelligent supercomputer on the planet. By equating consciousness with intelligence, it has somehow become more plausible that A.I. could become sentient than that other species already have sentience. What makes us human very well could be our impressive brains and computing power, but that's not what differentiates us from A.I.; rather, it is our biological, living experience, which we share with all organisms, that is the obstacle. "You don't have to be smart to suffer, but you do have to be alive," as neuroscientist Anil Seth puts it. –Kaitlyn Michaud
Response to Question 2:
I believe being a technical expert comes down to real-world experience. Things like education can be good indicators of who has the potential to be a technical expert, but I personally would place the most trust in whoever has the most direct, tangible understanding of the situation. A good example of this is the difference between a mechanic and an engineer. Most people would probably consider the engineer more of a technical expert with regard to the dynamics and mechanical properties of an internal combustion engine, but a mechanic has more hands-on experience working on cars. As for how this dynamic may change in the future, it's hard to say. A large part of a mechanic's job is becoming integrated with computer systems; for example, diagnosing problems now requires hooking the ECU up to a laptop. The criteria for being a technical expert on cars have certainly changed in the past fifty years, and fifty years from now it may be about navigating a computer system just as much as mechanical know-how. With the advent of electric and self-driving cars, the job of a mechanic may become entirely automated too. The trend seems to be continually removing the human element from the equation, for better or worse. – Chase Conder
Response to Question 1:
I think this brings up an interesting point, one that has been reinforced and has recurred throughout the course. I personally don't think it is our intelligence or our ability to learn things that makes us human. I think it has a lot to do with the empathy side of things; however, pinpointing empathy in and of itself wouldn't be right either. I think empathy is part of it, but a lot of it has to do with our ability to think about and process things above or greater than ourselves. Most organisms think on a strictly individual level, or if not individual, then a very closely related one. Humans, on the other hand, have the ability to consider their actions and how they could affect humanity as a whole. They have the ability to harness emotion and then act on it. I think emotion in general has a lot to do with it, and the fact that we can generate raw emotion and then act on it is a large part of what makes us human. Although that encompasses empathy, I think suggesting that empathy alone is what makes us human does a gross injustice to the complexity and beauty of humanity as a whole. - Lake Deane
Question 2 Response:
I believe that Mark Zuckerberg is a technical expert. This man was born a genius; he is also very scheming. Zuckerberg attended Harvard University on a scholarship, and people were aware of his creativity and intelligence. Because of this, two of his peers scouted him out and requested his help to start a website for Harvard students to communicate. Zuckerberg liked the idea and took it to the next level: he created Facebook. The website he engineered was made for anyone who wanted to connect with those around them. Anyone who knows anything about social media knows that Facebook is thriving and is one of the most popular websites in the world. This is why I consider Mark Zuckerberg a technical expert. On the other hand, the men who told Zuckerberg their idea did not feel the same way. They sued him for stealing their idea, but Zuckerberg used his scheming skills to overcome that barrier. Today, he is one of the wealthiest people in the world because of his website. I also call Zuckerberg a technical expert because he found a way to compel people all over the world to give him and his company the majority of their information and all of their data without realizing just how much they were giving away. -Tyler Baylor
Response to Question 1:
I do not believe that an elevated level of intelligence in future technology makes us less human. We are human because of our intelligence, but we are also human because of our mistakes and the things we don't know. That is why humans are always so curious to learn more. We are human because we accept our mistakes and the things we do not know, and we try to learn new things to improve. Super-intelligent technology begins super-intelligent and learns more as data accumulates, but mistakes are not supposed to be accepted in super-intelligent technology. -Nina Vernacchio