Posts

Elon Musk invested early in DeepMind just to keep tabs on the progress of AI

Per this TechCrunch article, Elon Musk... was in on the AI train early with an investment in DeepMind, which was later acquired by Google. Musk wasn't in DeepMind for a return, as is the case with most investments; he wanted access to greater insight into DeepMind's progress, and the progress of AI in general... The enterprising CEO wanted to be able to see how fast AI was improving, and what he found was a rate of gains that he hadn't expected, and that he thought most people could not possibly expect. As TechCrunch points out, Musk's anxieties around AI are considered extreme by some of his Silicon Valley peers, but the man definitely seems to have a knack for long-term preparedness planning. As covered in my last course on Bostromian philosophy, Musk has shared that his understanding of AI and its status as an existential threat is heavily influenced by Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies".
Recent posts

A brief history of tomorrow

If you found my class interesting at all, you have to check out this a16z episode, "Brains, Bodies, Minds... and Techno-Religions": https://a16z.com/2017/02/23/yuval-harari-from-homo-sapiens-to-homo-deus/ Evolution and technology have allowed our human species to manipulate the physical environment around us — reshaping fields into cities, redirecting rivers to irrigate farms, domesticating wild animals into captive food sources, conquering disease. But now, we're turning that "innovative gaze" inwards, which means the main products of the 21st century will be bodies, brains, and minds. Or so argues Yuval Harari, author of the bestselling book Sapiens: A Brief History of Humankind and of the new book Homo Deus: A Brief History of Tomorrow, in this episode of the a16z Podcast. What happens when our body parts no longer have to be physically co-located? When Big Brother — whether government or corporation — not only knows everything about us, but can make…

A method for "locking up" a computer superintelligence

An interesting but technical blog post: https://iamtrask.github.io/2017/03/17/safe-ai/ "TLDR: In this blogpost, we're going to train a neural network that is fully encrypted during training (trained on unencrypted data). The result will be a neural network with two beneficial properties. First, the neural network's intelligence is protected from those who might want to steal it, allowing valuable AIs to be trained in insecure environments without risking theft of their intelligence. Secondly, the network can only make encrypted predictions (which presumably have no impact on the outside world because the outside world cannot understand the predictions without a secret key). This creates a valuable power imbalance between a user and a superintelligence. If the AI is homomorphically encrypted, then from its perspective, the entire outside world is also homomorphically encrypted. A human controls the secret key and has the option to either unlock the AI itself (relea…
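To make the "encrypted predictions" idea concrete, here's a minimal sketch of my own, not the scheme from the post (which uses an efficient integer-vector homomorphic encryption method and encrypts a full neural network). This toy masks a linear model's weights with a secret additive key: the mask survives dot products with plaintext inputs, so anyone can compute a prediction, but the result is meaningless until the key holder subtracts the mask term. It is for illustration only and is not secure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Secret key: one random mask value per model weight.
secret_key = rng.normal(size=3)

def encrypt_weights(w, key):
    # Toy additive scheme: E(w_i) = w_i + key_i. The mask is preserved
    # under addition and under multiplication by plaintext inputs.
    return w + key

def encrypted_predict(enc_w, x):
    # Anyone can run this: dot(E(w), x) = dot(w, x) + dot(key, x).
    return enc_w @ x

def decrypt_prediction(enc_pred, key, x):
    # Only the key holder can remove the mask term dot(key, x).
    return enc_pred - key @ x

w = np.array([0.5, -1.2, 2.0])   # plaintext linear-model weights
x = np.array([1.0, 3.0, -2.0])   # a plaintext input

enc_w = encrypt_weights(w, secret_key)
enc_pred = encrypted_predict(enc_w, x)

print("encrypted prediction:", enc_pred)   # useless without the key
print("decrypted prediction:", decrypt_prediction(enc_pred, secret_key, x))
print("plaintext check:     ", w @ x)      # matches the decrypted value
```

The power imbalance the post describes falls out directly: encrypted_predict can run anywhere, but its output is shifted by the mask term, so only whoever holds secret_key can read the real prediction.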

Companies working on human enhancement

I thought this was really interesting: a podcast on companies that see their mission as enhancing humans. This cracks me up: philosophy has just started to talk about whether this is right or wrong, and in the meantime, people are doing it anyway. These companies are:

Nootrobox - drugs (more or less) to make you better (or so they say); this fits in really well with the discussion we had about whether ADD meds or depression meds count as human enhancement
Halo - using a targeted electrical current to the brain to improve learning speeds
Soylent - replacing food with something healthier, cheaper, and more efficient (not really human enhancement, but some have caffeine and l-theanine, so there's the drug connection)
Apeel - preserving produce longer (not sure how they ended up on the panel)

Summary: Humans have always wanted to enhance themselves — from getting nutrition just right to optimizing their performance, whether in sports or health or work. And food is a…

To Mars

We could totally go to Mars; we just need to demand our governments prioritize space travel over expenses like killing brown people. But, since that seems to be impossible, we can support companies that are doing it on the chance they might get rich. The company in that article, SpaceX, makes rockets for NASA, the military, and other countries. It's owned by the same guy who owns Tesla, Elon Musk... who recommends everyone read Nick Bostrom. He thought we needed to get off Earth to reduce the risk of extinction, so he made a company to do that; he was worried about climate change, so he made Tesla, with the plan of making the internal combustion engine irrelevant within a few decades; he also made SolarCity, to get more people to use solar power. What I'm getting at is, he saw problems in the world, and they scared him, but he did something about them. His plans are working, too, because he is smart and hard-working. I think you all have the potential to impact the world like…

Lesson 6

That was too cool not to show you. Anyway, today I want to talk about any thoughts you have about relationships between the things we've talked about so far:

"Where are they? Why I hope the search for ET life finds nothing"
Existential threats to humanity
What is AI?
The effects of AI and other technological changes on the human condition
Ontology and epistemology (what is and what can be known)
Ethical systems like utilitarianism

I'd like to go more in depth this week on rule-based ethical systems (deontological ethics), then talk about the ethics of human enhancement.

MOST STILL TO COME:
Nick Bostrom's answer to the 2010 Edge Question, "How is the Internet changing the way you think?"
The Ethics of Human Engineering

PS This company tried to graph out what happens inside neural nets... the results look a lot like life. It's pretty shocking(ly cool): "Many of the images created by Graphcore, which are technically graphs, are b…