

Westworld Philosophy

  1. Tycke
    This thread is a continuation of a conversation between @Wio and me in my "Ask me (almost) anything!" blog post, linked here: https://www.animeforum.com/entry.php...hing!#comments
  2. Tycke
    "The show had a lot of messages, some of which are obvious and some of which are more subtle. Since Ford is a very intelligent character with an air of authority (similar to a wise-sage archetype), I got the impression that some of the things he says come directly from the writers' hearts. At the end of season one, Ford gives a speech about how he likes stories and hoped that they could "help us become the people we dreamed of being", but gave up on that hope because humans "don't want to change, or cannot change, because you're only human after all". My interpretation is that the writers are progressives (living in a very progressive bubble in California) who wish to use stories to promote their values to the general public; however, it hasn't gone as quickly or smoothly as they had hoped, which has made them misanthropic and pessimistic. I know it's presumptuous, but there are lots of little things that give me this impression."
  3. Tycke
    I used to follow the same line of logic that, from your point of view, the creators of this show follow. A small part of me still thinks the world would be better off if we placed it in the cold and deliberate hands of AI and let humans die off.

    However, recently I've been thinking more about how that thought can only come to mind because I have human intelligence. If deer started overpopulating the world, would they stop their actions to consider the consequences for the environment around them? Not any more than natural selection and the decline of resources would force them to. Humans, on the other hand, while they can be as stupid as natural animals, more often than not think about the future and the effect of their actions on their children. At least, that's what I like to believe. At the same time, we still have one foot firmly in the natural world, so we sympathize with creatures who follow their natural instincts.
  4. Tycke
    I think AI will be great at preparing for future possibilities and consequences, but it will lack the natural empathy that lets living things behave in ways that are natural to them. Because of that, AI wouldn't feel any restraint about killing humans, animals, plants, anything really, to reach what it believes is the "perfect" end result.
  5. Wio
    I've been thinking about a couple of things. The first is whether, if we could change human nature, we would be able to do it without catastrophically dangerous consequences. I look at the cultural changes that happened in the 20th century, and even those have had serious consequences that bother me. I bring this up because the AI is said to have a better nature than humans in some way, but it's never spelled out explicitly how or why. It's easy to say people should be less greedy or more thoughtful, but we aren't familiar with the downsides of tweaking those traits.
  6. Wio
    The second is that I think the reversibility of robot injuries makes them less harsh than the human equivalent. Kill a human and they're gone for good, but "kill" a robot and it can be revived as long as the memory core isn't damaged. Traumatic memories can also be erased, and injuries are easily repaired. We might gauge their suffering by comparing it to human suffering, but that intuition is based on permanent damage and memory persistence, which don't apply to robots.
  7. Tycke
    I've been thinking about human "greed" and thoughtfulness a lot lately. I don't think greed is inherently bad. In fact, greed, to some extent, is what kept us flourishing as a species. I can't think of too many other sentient beings that have a sense of greed like humans do. Like most things, it's a double-edged sword: too much greed can lead to over-abundance, laziness, and an over-inflated sense of self-worth, among other negative things. However, it can also lead to a very strong will and ability to provide for kin and companions, and to ambition in general, which I think are good traits. I also think that being overly thoughtful has its downsides: it leads otherwise fit potential parents to not have children, or to be overly cautious and not take necessary risks. So I definitely agree that we don't understand the downsides of altering traits that are not actually "bad". Of course, some people may think otherwise.
  8. Tycke
    I also agree about the damage. I believe that immortality is anti-life. Living things die; robots do not. Living things therefore consider their place in the chain of life and death in a way that a non-living thing never would or could. And while humans may take life for granted sometimes, there is almost always a point where we realize our own mortality and start to respect life in general. I think it would take a very special non-living thing to consider its actions and their consequences for living beings; like you said, it could be reckless and be regenerated no problem. That's also why I think AI copies of ourselves, which are very likely in the future, are not a good thing. What's to keep them from killing the real person behind their personality? In fact, I can see a lot of reasons why they would want to kill the real person their personality is based on: why would they want the risk of having a true copy of themselves that could commit crimes while posing as them?