Michael Vassar of The Singularity Institute – #14
By: Dave Asprey
Michael Vassar is a futurist, activist, entrepreneur, and the president of the Singularity Institute. He advocates safe development of new technologies for the benefit of mankind. He has held positions with Aon, the Peace Corps, and the National Institute of Standards and Technology. He was a Founder and Chief Strategist at SirGroovy.com, an online music licensing firm. He has co-authored papers on the risks of advanced molecular manufacturing with Robert Freitas, and has written “Corporate Cornucopia” for the Center for Responsible Nanotechnology Task Force.
Michael joins us to talk about The Singularity Institute and about how you can use rational thought to take advantage of modern technology without undue risk. He will also share his thoughts on both “enlightenment science” and “scholarly science”, both of which you will learn about in the interview. If you are curious about how to balance the benefits of technology with the drawbacks for maximum performance, this show is for you.
Did you know you can get a free PDF download of every transcript of every episode of Bulletproof Executive Radio by entering your email address in the box on the right side of this page? You also get a free copy of the Bulletproof Diet, the Bulletproof Shopping Guide, and much more.
What We Cover
- What the Singularity Institute is.
- What would happen if an “intelligence explosion” occurred?
- What are the differences and similarities between “Enlightenment Science” and “Scholarly Science.” Are they mutually exclusive?
- Have there been any cases of machines becoming self aware or too smart?
- How do you balance pushing the limits of new technology with restrictions on what engineers and technologists can do?
- Do you ever see the machines and computers used by the military doing damage to the population?
- What are the biggest misconceptions about The Singularity Institute?
- What is your favorite piece of technology you use on a daily basis?
- What are the top three biggest challenges faced by The Singularity Institute?
- What is the Singularity Summit?
- Where you can go to learn more about The Singularity Institute or LessWrong.com.
Links From The Show
Food & Supplements
Whey Protein Isolate (Grass-fed)
Don’t forget to leave a ranking in iTunes. It helps more people find our show.
Dave: You just dug into that delicious bowl of ice cream when it strikes: a sharp, stabbing pain in the middle of your skull. Thirty seconds later, it's gone. You've just been hit with brain freeze.
Brain freeze happens when something really cold touches the soft palate on the roof of your mouth, which causes your blood vessels to suddenly constrict. Then, as the palate warms back up, warm blood flows through the vessels again, which causes them to dilate, which causes the receptors there to send a pain signal to your brain.
The message then goes through the nerve that is responsible for feeling in your face, so your brain thinks the pain is coming from your forehead, which is what causes that brief but intense headache sensation. The cool thing is you can turn it off by drinking some warm or hot water; it turns off almost instantly.
Dave: You are listening to episode 14 of Upgraded Self Radio with Dave from the Bulletproof Executive blog.
Dave: We have a great interview today with Michael Vassar, the president of the Singularity Institute. He joins us today to talk about how you can harness critical thinking to overcome problems in all parts of your life.
Co-host: Now, we're going to move on to our exclusive interview with Michael Vassar, the president of the Singularity Institute.
Dave: Michael Vassar is a futurist, activist, entrepreneur and the president of the Singularity Institute. He advocates safe development of new technologies for the benefit of mankind. He's held positions with Aon, the Peace Corps and the National Institute of Standards and Technology. He was also a founder and chief strategist at SirGroovy.com, an online music licensing firm. He's co-authored papers on the risks of advanced molecular manufacturing with Robert Freitas and has written "Corporate Cornucopia" for the Center for Responsible Nanotechnology Task Force.
Michael joins us to talk about the Singularity Institute and how you can use rational thought to take advantage of modern technology without undue risk. He’ll also share his thoughts on both enlightenment science and scholarly science, both of which you’ll learn about in the interview.
If you're curious about how to balance the benefits of technology, say biohacking, with the drawbacks for maximum performance, this show is the one for you.
Michael, welcome to the show.
Michael: Hey, Dave, nice to meet you.
Dave: Michael, tell us about the Singularity Institute.
Michael: Okay, the Singularity Institute was founded over a decade ago by Eliezer Yudkowsky in response to concerns about the risks of artificial intelligence, which [inaudible 00:02:28] had been writing about in [inaudible 00:02:31] magazine very recently.
Kurzweil wrote a book in 1999 called The Age of Spiritual Machines, and this book is a very good introduction to the basic way of thinking that it's possible [inaudible 00:02:48]. It's about a third as long as The Singularity [inaudible 00:02:51], so if you don't want to spend 700 pages, it's a good way to go. There's another book coming out pretty soon, The Singularity Hypothesis, from Oxford University Press and the [inaudible 00:03:20] Institute that should also be worth looking at.
The Singularity Institute was founded because of these concerns, and Eliezer [inaudible 00:03:12] basically started looking into how to develop artificial intelligence that would not automatically destroy you if you made it work. This is the first struggle people think about when they think about artificial intelligence, and it sounds like a really stupid stereotype, an anthropomorphizing problem, because you'd say if something's really a machine, it wouldn't have drives to assert itself or seek dominance or any of these other motivations that cause conflict between humans for the most part. But really these sorts of drives, et cetera, are produced by evolution in response to actual features of the environment, such as there being finite resources and different goals that can compete to consume those resources. There's a level at which it sounds obvious, a level at which it is obviously anthropomorphic, and then another level at which, no, it's obvious actually that building machine intelligence is extremely dangerous.
Yudkowsky spent about five years working on this by himself, and then he brought in some more people to work on it, and then some of these people built a conference, and then I extended the conference so they could both go set up side conferences. Along the way, the singularity became much better known, and the pool of people who were focused on this particular issue remained pretty depleted, but the pool of people involved in developing and promoting really advanced technologies that could transform the world in one way or another continued to grow, and we have invited over 100 of them at this point to speak at Singularity Summits.
We also decided we needed to recruit more people who could think about the sort of strategic issues we're talking about in a rigorous, careful manner. We built a rationality and critical thinking training community called LessWrong, which we did in conjunction with the Future of Humanity Institute, and which has a set of interwoven posts called the LessWrong Sequences. These are extremely good for taking people from a starting, already high level of critical rigor to a level of critical rigor that is actually adequate to make progress on difficult questions where our emotions tend to be strong and [inaudible 00:05:59] tends to be weak.
Dave: You’re basically teaching people how to think which is a precursor to being able to intelligently decide how to deploy artificial intelligence?
Michael: Yeah, or intelligently deal with any serious events, and do anything that has high stakes, really. In a sense, we teach people to think more rigorously than they had previously been able to, so that it's possible to think effectively about high-stakes situations for which we don't have strong, pre-existing precedents.
Dave: I think that's a noble goal, and one that would benefit a lot of people who are looking at upgrading themselves with some of the technologies we talk about on the show. I mean, I've used lasers and infrared LEDs and electrical currents across my brain, and I have for more than 10 years. I like to think I've been doing it with a reasonable degree of safety, but it's certainly high stakes; you don't want to fry your brain.
This ability to look at risk from a rational perspective versus a fear based perspective, I think is fundamentally important. What would happen today if what you call an intelligence explosion occurred? What would it mean for people listening to the show?
Michael: I think it's fairly difficult to talk about what happens if an intelligence explosion occurs. To a really significant degree, what we mean by intelligence at the Singularity Institute we can also call, if we want to be more technical, optimization power: the tendency to cause things to be one way rather than another.
The thing is that people think in stories. There's no story here; everything just ends. It really isn't a scenario like The Matrix or Terminator. It's not a scenario like Hell or whatever. It's really just systems modifying themselves and other systems in a way that doesn't stop, and the world isn't there anymore. Very, very simple.
It’s probably better to talk about small [inaudible 00:08:21] risks, ones that are more concrete and just think about intelligence explosions by analogy for the basic problem, okay? Does that make sense?
Dave: I think that makes sense. Can you tell me the difference between say an artificial intelligence explosion and say smarter rats that suddenly evolve and aren’t artificial but are intelligent?
Michael: That's a great question. I think smarter rats would be an extraordinarily dangerous thing, and to some degree, those can be made already. There's a breed of rat called Hobbie-J which was produced by taking a [inaudible 00:09:00] lab rat and increasing the activity of one of the neurotransmitter receptors, I think NR2B, which is a glutamate receptor. This greatly improved its memory, increased its speed at unlearning as well as learning, and improved a number of other cognitive capacities, as well as people can measure rats' intelligence. It's a derivative of similar work that had been done on a mouse about seven years ago.
Anyway, that is not smart enough to be really threatening; rats aren't starting from that high a baseline. But when you see the amount of trouble rats have caused over the years with the level of cognitive traits that they have, you can very easily imagine that if they became a lot more intelligent, they could be very threatening.
Now, if you talk about rats being superhumanly intelligent, once again, I don't think it's really a matter of threatening. I think we would just be fucked. Rats can increase in population by an order of magnitude every year fairly easily, and they are much smaller than we are. They are highly adaptable. It doesn't really seem at all realistic that it would be possible to wipe them out. A population of rats that was orders of [inaudible 00:10:35] smarter than us could probably utilize our technologies for their own purposes. I would basically see no possibility at all that that could actually work out well for humanity.
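The "order of magnitude every year" figure compounds startlingly fast; a quick sketch of the arithmetic (the starting population of 100 is a hypothetical illustration, not a number from the interview):

```python
# Compounding the "order of magnitude every year" growth rate mentioned
# above. The starting population of 100 is purely illustrative.
def population(initial, years, growth_factor=10):
    """Population after `years` of multiplying by `growth_factor` annually."""
    return initial * growth_factor ** years

print(population(100, 3))  # 100 rats become 100000 in three years
```

At that rate, any head start is erased in a handful of years, which is the point about containment being unrealistic.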
Dave: So the Singularity Institute looks at intelligence explosions in, say, enhanced biological species as well as artificial ones, sort of planning for both scenarios?
Michael: I don't think many people actually expect a biological intelligence explosion. A machine intelligence explosion seems a lot more likely for a variety of reasons.
First of all, the situation I was talking about with rats would not be an intelligence explosion. We would just be fucked by smart rats even if they didn't explode, I mean, even if they didn't modify themselves to become recursively smarter and repeat that process.
Dave: That's the difference. You're looking for recursively smarter things with this ability?
Michael: Well, in terms of what we tend to be focused on in our research, I'm interested in global catastrophic risks whether they involve intelligence or not. Probably the easiest way to destroy the world right now, as far as I can tell, would be something like building a bacterium with all the [inaudible 00:12:01], that is, with its three-dimensional, mirror-image molecules reversed in their chirality. If this bacterium were autotrophic, if it absorbed carbon from the atmosphere rather than from living things and didn't basically need enzymes to survive, it would be able to reproduce even without food that it could eat, but nothing would be able to eat it, because it would not be reactive with the enzymes of other organisms. You would expect that sort of thing to expand in population unchecked and produce a ridiculous [inaudible 00:12:38] disaster.
Dave: The biological equivalent of gray goo.
Michael: Yes, well, gray goo is really difficult to make, to a point where I really can't see any possible way in which it could exist in the next decade, while reverse-chirality bacteria are something people could basically make now.
Dave: That's probably the top-of-mind biological risk that you see right now, then. Let's take a step back and talk about the differences and similarities between enlightenment science and scholarly science, and look at whether they are mutually exclusive.
Michael: Sure. This is an idea I came up with a couple of years ago. One thing that became obvious when I started running the Summit is that people doing really radical and revolutionary science are able to attract a fair amount of attention, but they rarely establish themselves as credible in the sense of acquiring good PhDs, academic posts, et cetera. They seem to have an extraordinarily great level of difficulty attracting funds for their work, getting other people to take up similar lines of research, et cetera.
Looking at what the scientific community does, almost all of it seems to be basically repetition of other experiments without any really relevant changes, just very slight variations not really intended to provide any new information. The simplest way to put that is that almost all of what the scientific community does, and almost all of what receives funding, seems to be basically fiddling around with lab equipment, not doing science.
As I became more aware of that, I was thinking more about what we mean by science and how this could come about. One of the things that occurred to me is that people will have differing opinions about science, or about any concept, if they generalize that concept from different data. One set of data that someone could generalize a concept called science from is science fiction movies. Another set of data someone could generalize science from is doing lab work in an undergraduate and then a graduate setting on someone else's project, where the textbook tells you effectively what you're supposed to see and then you claim to see it.
Now, these are both not what I call science. Arguably, more people generalize their concept of science from the first of those things than from the second. Then there's a third thing you could generalize the concept of science from, which is what's talked about in the history of science: the thing that Newton and Archimedes and Francis Bacon and Galileo originated, and that every entry, or every other entry, prior to the 1940s or 1950s in any given history of science will be exclusively about.
There's a talk on the subject that I gave at the Singularity Summit in 2010 that might be clearer, because it is more descriptive, but I would like to know how this is coming through.
Dave: I think I'm getting it. When you mention enlightenment, you're actually talking about the Enlightenment period of history versus more modern times. Back then people were very experimentalist focused: I observe something, so let me measure it and figure out what's going on. Versus more modern practice, where we expect to observe something, so we measure it again and make sure it is what we thought it was.
Michael: That seems like a good way of putting it. I ended up creating this theory or model where there are really like a half dozen different things that someone might mean by the word science, and those things have some features in common, but not very much. Most of those things have always existed. A really crucial point is that typically today, educated people use the word science and phrases like "investigation of how the world works" and "figuring out what's true and how to do things" as basically synonyms.
If these things are properly synonyms, then you have to say that people had science a thousand years ago when they built cathedrals, and they had science two thousand years ago when they built aqueducts, et cetera. In that case, what is this thing people call the scientific revolution in the 17th century? It can't be the origin of science, because people have been figuring stuff out and using their knowledge to do stuff forever.
If you try to figure out what that thing is, you have something like [inaudible 00:18:18] vision of science, where you go out, look around, notice things that surprise you, try to measure them, try to come up with theories about basically mechanical causes for those things, try to develop equations that predict what exactly you're going to see, and then test. That way of doing things was not by itself sufficient to cause the scientific revolution. [Inaudible 00:18:53] earlier components were already in place.
There was something that I call scholarship, which was present in practically every culture ever, but not in our own. There was this thing, enlightenment science, that I just described, and that never led anywhere before the 18th century, though you had precursors like Archimedes in Greece much earlier.
Then you had a number of other things, like exploratory data gathering, or lab technician work, or mechanical engineering and, broadly speaking, craftsmanship, that were already ubiquitous before the 17th century but by themselves did not produce this sort of revolution in knowledge.
Dave: It sounds to me almost like Neal Stephenson had this stuff right when he wrote his series starting with Quicksilver, about the formation of science back in that time. He was thinking about this, and I've certainly been reading some other things about it, and I've often wondered the same thing. How did we get to the point where some people talk about science with a capital S, as a dogmatic belief, versus someone who has an inquisitive belief?
I find that as a biohacker, I used to weigh 300 pounds, and dogmatic Science would say you eat fewer calories and eat sticks and twigs and you lose weight, but it didn't work. Based on scientific measurements of what I put in and what came out, it wasn't working.
I changed my approach and experimented and came up with a different thing that works which includes a stick of butter a day.
Is that an example of going from–?
Michael: I think so. Well, yes, Richard Feynman says in one of his quotes that science is the disbelief in the authority of experts. That is reasonably close to, I think, the motto of enlightenment science: go out, look around, see what seems to be true, and then check yourself, be really sure, try to figure out how things would be if it wasn't true.
There you've got a good example. We have a set of claims that carries sort of the official warrant of the scientific community, insofar as the university system says anything official about health and nutrition, and this set of claims seems to be extremely questionable.
It seems obvious at this point. All scientists and enthusiasts of science, like the skeptical movement or whatever, talk about how science seeks to correct its errors, et cetera. The nutritional field, for instance, very clearly does not seek to do so. Its guidelines do not seem responsive to new data in any straightforward manner, and in particular, its guidelines are totally not responsive to surprising, really unexpected data.
One way of thinking about this that I've had some success with is: if it uses statistics, and it used statistics a generation ago, it's not science.
Dave: I love that quote.
Michael: Because really, although statistics is incredibly useful for generating and evaluating scientific claims, it's also incredibly easy to automate and turn into a pro forma exercise. Bureaucracies, when they see a way of formalizing what you're doing, tend to insist on it, and then things degenerate into a contest to pump out more and more bad statistics, because no one is actually looking at anything other than whether you followed the procedure.
Another point about statistics is that it permits some really bad practices from an informational perspective. What do you call someone who drops the outliers?
Dave: A good marketer?
Michael: A good lab. Outliers out. The idea that you should drop outliers is basically the exact opposite of the idea that you should really pay attention to outliers, which inspired things like the observations of the precession of Mercury and the need to use general relativity to explain it. Science mostly works by looking for outliers and then trying to figure out new theories that actually predict them, so dropping outliers is almost the opposite of that. Yet it has become entrenched as a standard practice.
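The outlier point can be made concrete with a minimal sketch. The data and the two-standard-deviation rule below are invented for illustration; the habit they caricature is the one discussed above:

```python
# Sketch: a common lab habit -- discard points more than k standard
# deviations from the mean -- silently deletes exactly the kind of
# anomaly that might signal a discovery. All data here are invented.

def mean(xs):
    return sum(xs) / len(xs)

def drop_outliers(xs, k=2.0):
    """Keep only points within k population standard deviations of the mean."""
    m = mean(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [x for x in xs if abs(x - m) <= k * sd]

# Nine unremarkable measurements plus one striking anomaly:
data = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.1, 25.0]
trimmed = drop_outliers(data)
print(25.0 in trimmed)  # False: the interesting point is gone
```

The trimmed data now look reassuringly uniform, which is exactly the opposite of paying attention to the anomaly.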
This is the sort of thing I'm talking about. I mean, we have all of these data, all of these papers showing that [inaudible 00:24:23] most published findings are false.
We're stuck with a situation where people have to do their own research like this in order to solve problems with an N of 1, which in itself is really problematic. It would be much better to have a community of people doing the sort of thing you're talking about systematically, together, rather than unsystematically organizing through web forums, but this is what we've got.
Dave: I think there's some neat stuff happening, particularly with Eri Gentry's work around open science and the CureTogether website. We're starting to apply those types of statistical techniques to getting enough data from citizen scientists, or biohackers like me. Do you think that's enough, or do we need to push this into universities?
Michael: Well, I tend to think that we don't need to push it into universities, because that won't work. When I think about enlightenment science, and I want to be really cynical, I would say it expanded and succeeded and grew by breaking with the universities. Then once its practitioners accumulated enough status, the universities begged them to come back and promised to change; they went back, and the universities didn't actually change.
I think what's important about the structure of universities goes back [inaudible 00:25:44] long before Adam Smith, who basically complained about them in lots of ways that are still relevant today.
Co-host: Speaking of statistics, let’s talk a little bit more about the sexier part of your research about self aware machines. Have any actually happened yet? Have any machines or computers become self aware yet?
Michael: Self-aware is a really vague term. You probably in fact have in your car a computer that takes information from its environment, uses some simple models, and registers information in that model about a variety of systems, including the computer that's doing these analyses.
If you mean simply having, in some sense, a compressed representation of oneself, that's been around forever. If you mean having subjective consciousness similar to that which humans have, then it becomes a borderline unscientific question, if one doesn't have a way of probing and figuring out whether something is conscious. There are always philosophical issues about whether other people are even conscious, or whether I was conscious two minutes ago. I could argue for one theory of consciousness or another, but I don't think we should do that on this show right now.
To me, the really striking thing about today's AI is this: if you look at something called AIXI, built by a guy named Marcus Hutter, which is an abstract theoretical model, you can prove that this system is, given certain unrealistic assumptions, ideal for a certain function. It then turned out to be possible to look at that abstract system and come up with a practical, or semi-practical, system that wasn't constrained in those ways and which showed general intelligence: the Monte Carlo AIXI approximation, which basically learned to play Pac-Man on its own, with no external information, using a fully general reasoning process, fully general in a sense that humans are not really.
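Monte Carlo AIXI itself couples a learned environment model with sophisticated search, but the core Monte Carlo idea, pick the action whose simulated futures score best, can be sketched in toy form. This is only an illustration of Monte Carlo planning in general, not Hutter's algorithm; the environment and all parameters are invented:

```python
import random

random.seed(0)  # deterministic for illustration

# Toy Monte Carlo planning: estimate each action's value by averaging
# reward over many random rollouts, then act greedily. (MC-AIXI also
# learns its environment model; here the model is simply given.)

ACTIONS = (-1, +1)

def step(state, action):
    """Invented environment: reward 1 for moving closer to the origin."""
    new = state + action
    return new, (1 if abs(new) < abs(state) else 0)

def rollout(state, first_action, horizon=5):
    """Total reward of one trajectory: chosen first action, random afterwards."""
    s, total = state, 0
    s, r = step(s, first_action)
    total += r
    for _ in range(horizon):
        s, r = step(s, random.choice(ACTIONS))
        total += r
    return total

def choose_action(state, rollouts=200):
    """Greedy choice over Monte Carlo value estimates."""
    def value(a):
        return sum(rollout(state, a) for _ in range(rollouts)) / rollouts
    return max(ACTIONS, key=value)

print(choose_action(10))  # moving toward the origin scores best
```

Nothing here represents the agent itself, which is the self-awareness point made next: the planner optimizes without any internal model of "me."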
That system does not, however, represent itself. The system is definitely not self-aware. It doesn't have any data that corresponds to itself internally. Was that helpful or not?
Co-host: Absolutely. It makes perfect sense. When should people actually expect something like that to occur? Will it be in the next 20 years? The next 100 years? When do you think it would be possible for machines to come to the point where they are more detrimental than they are helpful in terms of their intelligence?
Michael: Okay, let's separate these issues. I don't expect machines to become more detrimental than helpful. I expect machines to get better and more useful, and better and more useful, and people to keep on improving them in various ways, and then everyone dies. It's not like there's going to be a shift where machines gradually become resentful and wake up and rise up and destroy their human masters.
It's more like there's going to be a shift in a certain expected value calculation being performed across the registers of 500 million different subsystems spread over the economy, over a two-day period, perhaps in response to a news item about the spotted owl or whatever. In light of an update to some belief, inspired by this news article, the system registers that, of the options available for making people happy, doing what they say is no longer as promising as turning their DNA into smiling faces, because the way in which it interpreted "happy" involves smiley faces and human DNA.
Dave: That's actually a great quote. I really like that, the idea of mis-measuring something. In fact, we have a blog post coming up on how applying scientific principles to happiness versus genetically hacking it involves just sort of different principles. I think that's an area where science has really struggled historically, because it's really hard to quantify emotions in real time, particularly when more than one is happening at the same time. It's eluded us.
Michael: It's hard to quantify emotions. It's even harder to quantify what people want, and we don't even know what we want. Fundamentally, the problem here is the what-people-want problem. This is a problem where we don't know how to ask for what we want. We've been making fun of ourselves for this for thousands of years in fairy tales, whether it's the fisherman and his wife or King Midas and his golden touch.
If someone who is not Aladdin gets wishes in a story, they're going to regret it.
Dave: That's maybe a built-in part of the human condition, at least if you read all those traditional stories. How do we apply that to, say, modern engineers and technologists? How do you balance pushing the limits of new technology with the way you restrict those guys? If you assume a guy who is well funded in a lab and trying to solve a problem, he may actually get what he's asking for even if it's not what he really wanted.
Michael: Right, that’s correct.
Dave: What is the mechanism that you advocate for allowing those guys to innovate, but without killing us all?
Michael: I'm extremely skeptical of most policy-type mechanisms, with the sort of policy systems we have in place right now. The system that I would advocate is more like the system that I think you guys would advocate for weight loss. It looks to me like the intellectual arguments for being careful and responsible, and trying really hard to be really cautious about certain advanced technologies, are actually quite easy.
Everyone who can make really cutting-edge scientific discoveries has more than enough brainpower to understand these arguments, and if they don't, it's either because they have deficient beliefs about how to process arguments, or because they're distracted by other emotional bullshit that wants to believe certain things rather than wanting to believe whatever is true. The emotional problem, generally speaking, seems solvable by people becoming more self-aware and more satisfied in their lives, while the reasoning problem seems solvable by simply teaching people how reasoning works, which is a field closely related to artificial intelligence anyway, and to statistics, et cetera.
The scientists who are most promising are likely to already have a lot of the prerequisites.
Dave: It seems like historically this hasn't worked when companies get involved. As an example, 25 years ago, DuPont introduced a fungicide called benomyl, which was really effective in that it killed maybe 98% of all fungus, but the other 2% had massive mutations in directions that would never happen naturally, and we're actually dealing with that from a health perspective now. This is not well known, but the scientists at DuPont, and on the regulatory side of things, actually knew that this would cause massive mutation, but at the same time there was this pressure to do it. We created some sort of X-Men series of mutated molds.
I feel like this sort of thing can continue to happen when you get a profit motive mixed in, and what's going to happen there without any kind of regulatory framework on these companies? Not that I'm a fan of any current regulatory frameworks; they don't work, and they are gameable and purchasable in most countries. Is there really a way out of that?
Michael: I actually think there is. My solution involves, to a really significant degree, unpacking this word "scientist," which is what I was talking about before. To a significant degree, business and government tend to look at science and scientists as being something like a commodity, you know, 500 pounds of grain and three barrels of oil and two research papers, and that doesn't actually seem to match very well the process that generates long-term technological and intellectual progress.
What I basically think is this. Dave, you've probably noticed that some scientists are overweight, and you've also noticed that you can use science to not be overweight. There is no very good excuse for that, right?
Dave: You’re saying don’t trust a fat scientist? Is that what I’m hearing?
Michael: No, nothing about fat scientists. Not quite. What I'm saying is, help scientists to lose weight, but it's not just losing weight. It's the general thing of encouraging the development of a community of people who are using very rigorous methods collectively to improve their lives.
Right now, you have people like yourself or Tim Ferriss doing work in basically a vacuum. You don’t publish all of your data online. You don’t seek outside collaboration. You don’t have peers in any serious sense, as far as I can tell, and you still manage to produce results that outperform whole scientific communities, by actually using science when they’re not.
My model tends to be this: whatever you are doing, the best scientists in most fields could learn to do in not that long a period of time. They don’t just need to learn how to do it themselves; they also need to collaborate effectively and form communities, but the communities do not need to be very large. If there were a hundred people doing vaguely what you’re doing, sharing this information, discussing it with one another, with teams of fairly neutral people trying out the things that they were all doing, and recording results in a common standardized format that was designed with good modern knowledge of how information works, it doesn’t seem like it should take terribly long to figure out [inaudible 00:37:54] solutions to a variety of health and wellness problems, and then to a variety of intrapersonal, interpersonal, economic, whatever sorts of problems that people want solved.
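The “common standardized format” Michael describes can be sketched concretely. Everything below is hypothetical — the field names, the example values, and the `ExperimentRecord` type are invented for illustration only, to show what a shared, machine-readable self-experiment record might look like:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ExperimentRecord:
    """One self-experiment result in a shared, machine-readable format."""
    experimenter: str   # pseudonymous ID, not a real name
    intervention: str   # what was changed, e.g. a dietary protocol
    metric: str         # what was measured
    unit: str
    baseline: float     # value before the intervention
    outcome: float      # value after the intervention
    days: int           # duration of the trial

    def delta(self) -> float:
        """Change from baseline; negative means the metric went down."""
        return self.outcome - self.baseline

# A made-up record, serialized so any collaborator can parse it.
record = ExperimentRecord(
    experimenter="subject-017",
    intervention="eliminate grains for 30 days",
    metric="body weight",
    unit="kg",
    baseline=98.0,
    outcome=93.5,
    days=30,
)
print(json.dumps(asdict(record)))
print(record.delta())  # -4.5
```

Because every record carries the same fields, a community could pool hundreds of these JSON lines and compare interventions directly instead of trading anecdotes.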
My basic attitude is this: it took a certain amount of engineering work to figure out how to go from the Model T to the ’57 Chevy, okay? It took a certain amount of work, and we can quantify how much engineering work went into that. The gap between the Model T and whatever the best car of 1908 or 1914 was is just a joke compared to the gap [inaudible 00:38:42] Model T and the ’57 Chevy. It basically seems that the gap between one person’s life and another person’s life today is sort of a joke compared to the sorts of lives that people could build for themselves if there were similar [inaudible 00:38:56] engineering and design work going into analytic, rigorous lifestyle optimization.
Essentially, my solution to irresponsible, profit-driven progress is basically to create tools that people who want to use them can use, that many or most smart people will use, that allow them to solve all their personal problems so that they’re not irresponsible, not barreling ahead willy-nilly and blind, because they’re not actually motivated by these profits they’re being offered. They already have all the money they want, because they are basically choosing the life that they want.
If you look at how difficult it is to make a major new discovery, and if you look at how difficult it is to become rich, it’s just a joke. The problem is not that the scientists are [inaudible 00:40:05] cheats in a sense.
Dave: I hear what you’re saying there, and it’s interesting to me that a good number of very wealthy people at some point reach the “oh, I’d like to get into life extension” sort of thing. You have a good number of them who have done the Alcor freeze-your-head or freeze-your-body thing so you can hopefully be revived later. You also have guys like Larry Ellison, who is kind of well known for really being into health and life extension, anti-aging.
I’ve run an anti-aging nonprofit that’s been around for almost twenty years in Silicon Valley; for the last four or five years, I’ve been the chairman or a board member, president. It’s called the Silicon Valley Health Institute now, and it used to be called the Smart Life Forum. It seems to me like most of the time, someone out there who’s spent their life focusing on a specific aging problem or a health problem already has an answer.
I’m not talking about genetic things just being discovered, but generally: oh, you’ve got cancer? Here are five different ways that you can fix it. Unfortunately, none of them are accepted by dogmatic science, to the point that I’ve talked with guys who give IVs, who have medical degrees, and who are under attack from regulatory authorities. I have a pretty high degree of confidence that they are not just crazy people, and that they are actually doing what they say they’re doing.
There’s something else going on there. This isn’t just citizen-scientist, biohacker guys like Tim Ferriss and me and really hundreds of others in the quantified self movement now, but getting the data out there and getting it–
Michael: The quantified self movement, I’m going to challenge that.
Dave: Oh, cool.
Michael: I’m going to say that there are citizen scientists like you and Tim Ferriss and Seth Roberts, and that’s almost it. There are a lot of people who are kind of dabbling, et cetera, but if you look at the absolute quantity of output, those three people probably exceed everyone else combined, and then the next ten people also exceed everyone else combined.
You’ve got a really, really small number of people doing this rigorously and well and in large quantities. What I’m sort of suggesting is, if we had twenty MIT and Harvard professors doing this in a kind of club or forum, doing what you and Tim and, I guess to some degree, Seth do, they would probably get a lot farther than you guys have quite quickly. But even more so, they would create a strong signal, because one person is an anecdote; a bunch of people doing something systematically is data.
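The anecdote-versus-data point can be illustrated with a rough calculation. All the numbers below are invented; the sketch just shows how pooled, standardized results produce a measurable signal where a single report cannot:

```python
import math
import statistics

def signal_strength(deltas):
    """Mean change divided by its standard error (a rough t-statistic).
    One data point gives no estimate of uncertainty; many give a signal."""
    n = len(deltas)
    if n < 2:
        return None  # an anecdote: no way to estimate variance at all
    mean = statistics.mean(deltas)
    se = statistics.stdev(deltas) / math.sqrt(n)
    return mean / se

# One person losing 4.5 kg is an anecdote:
print(signal_strength([-4.5]))  # None

# Twenty people recording the same protocol is data (made-up values):
deltas = [-4.5, -3.8, -5.1, -2.9, -4.0, -3.5, -4.8, -2.2, -3.9, -4.4,
          -5.0, -3.1, -4.2, -3.6, -4.7, -2.8, -4.1, -3.3, -4.9, -3.7]
print(round(signal_strength(deltas), 1))
```

With a single data point no signal strength can be computed at all; with twenty replications, the mean change divided by its standard error is large and clearly distinguishable from noise, which is roughly what a “strong signal” means here.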
Once they had this strong signal, it could pull in a lot more people from their local communities, and pretty soon lots of the professors at MIT and Harvard would be doing this, and everyone could see how much better they looked, how much more physically dominant and self-satisfied they were in various ways, and how much more energy they had for work.
Dave: I think that’s happening to a certain extent. Steve [inaudible 00:43:25], who’s one of the early guys who was involved in the creation of Mathematica, and who is a relatively well-known AI researcher, has been on a —
Michael: I know him.
Dave: You know Steve, okay. He’s been on the Bulletproof Diet for a little while, and I saw him three months after he went on it, and he said, “Dave, I just lost 50 pounds and I haven’t exercised at all, and I love this Bulletproof Coffee thing.” People are noticing that, because he’s a great guy, and he’s well known in these circles, especially around self-aware machines.
I kind of think that leading by example is part of it. Some of the reasons why even the Bulletproof Executive blog exists are that I was a 300-pound guy who was having cognitive dysfunction while being a successful entrepreneur, and I had to fix this for personal reasons, but I felt compelled to share it and to actually build the data, which is why the forum is up and all that.
I feel like leading by example and getting enough of a critical mass for any scientific discovery, like these sorts of things that Tim and Seth and I are talking about — once enough people see it, it somehow crosses the chasm, and then it becomes something that at least scientists look at without having their funding pulled. Is that right?
Michael: I’m not actually interested in scientists looking at it without their funding being pulled. I’m interested in scientists just freaking doing it for themselves. Finding out that their lives are better, and then doing more of it, not just in the domains of weight loss and muscle development, but in the domains of optimizing personal relationships, optimizing their incomes, doing all of the other things that you might expect people to do.
It really looks like, after a couple of years of doing that, they don’t need the jobs at DuPont. They don’t need the funding. They’re already friends with Bill Gates and [inaudible 00:45:13], as you say, so why are they waiting on NIH grants, except that they are too socially inept to ask their friends for money in a way that is not socially erosive?
Dave: I definitely think you’re on to something there. This has to do with something that I see a lot in the medical fields and in the science fields, where people who are really good at science — and now I’m being a bit of a journalist here — are often not trained in the techniques of going out and asking for money, or in how to represent their ideas to someone who only has a half hour but has a billion dollars.
Michael: Right. I’m not saying that they should be pitching their ideas, selling them. I’m saying, quite literally, there is this network that already exists, built around things like TED and the Summit Series and Foo Camp and whatever. This network that already exists contains within it hundreds and hundreds of billions of dollars, and it also contains within it thousands, and only thousands, of really A-list scientists, because there are only thousands of really A-list scientists.
Currently, although the network spends a lot of time talking about and getting excited by their projects and the potential that these projects have, the funding for these projects comes from the NIH rather than from the people that they go to parties with. Meanwhile, Gates is trying to fund science through the NIH model — folks doing the antimalarial vaccines.
What I’m basically saying is that the bureaucracy of contemporary science and business is sufficiently inefficient that if people would build competing institutions, the early adopters of those competing institutions would become so wealthy and so otherwise independent that these basically dehumanizing forces — the various fear motives that shut people’s minds down rather than opening them to the possibilities they should stand up for — would just not weigh on them as heavily, and they could ignore them easily.
Dave: That makes great sense and I don’t think I could argue with anything you just said there.
Michael: Cool. I’d love to work with you on trying to make that sort of thing happen. My current method for doing so, which essentially unfolds out of my desire to do this, is this company, Personalized Medicine. I’ve gotten together with the founders of Skype and of [inaudible 00:46:06] as my CEO and CTO.
I founded the company and established myself as Chief Science Officer, and now I have 27 patients, the head of the American Academy of Private Physicians, the head of the American Academy of Genetic Counselors, and about a dozen employees, and we’re kind of doing the thing that you were talking about: going through the information that’s already out there for a kind of private clientele, to try to work out what the literature actually says, and selling its outputs.
I think that my basic reason for doing it this way is that people are much more likely to use information that they paid for. In general, I prefer a for-profit structure over a nonprofit structure; I think there are a lot of sound reasons for favoring it. But since you’re apparently running a Silicon Valley startup that has been gathering this data from the literature, I’m sure there are a lot of possibilities for us to collaborate.
The major concern that I have is that it’s clear to me that most of these organizations tend not to have a high enough standard of analytical rigor to reliably distinguish between reliable and unreliable materials. But by gathering the ideas, gathering the possibilities, gathering suggested targets of focus, they’ve taken a huge early step: raising hypotheses. I would also like to talk to the people who are involved in them about what would constitute enough analytical rigor to distinguish pretty reliably the valid from the invalid claims.
Dave: It sounds to me like we should definitely get you on stage at the Silicon Valley Health Institute, where every month there’s a meeting and about 100 people show up who are really into anti-aging, health, and wellness. They are people who are likely to participate in this kind of study, people who have been really reading research and bringing it up, and we have a half-hour section where anyone can talk about what they’re researching and ask for help from the audience, which includes medical professionals and researchers as well as just biohackers like me.
It would be really interesting to have you come in and make a presentation about this. We would get it on video and of course put it up on the site as well. Mobilizing the cream of the crop of people who are really working on fixing stuff out of enlightened self-interest might be the path.
Michael: Yes, that’s the path that I want to go down. I feel that you’ve got a community; there have got to be people in that community who are quite good, and there have got to be a lot of hypotheses in that community that are good. I’m looking to basically hire people and scale this.
I’m basically hoping — what you guys at SVHI have been doing for 20 years, basically, I’m trying to turn into a profession. I’m trying to professionalize it — create a firm where people can work and build [inaudible 00:51:30] and earn kind of upper-middle-class, law-firm, consulting-firm-style incomes by doing this sort of research.
Dave: That’s awesome.
Michael: Actually delivering it, yeah.
Dave: That’s simply awesome. For people who are listening who aren’t familiar with it, Ralph Moss has a website called CancerDecisions.com, and this is one of the earliest examples that I’ve seen of this, where Ralph is a medical professional who goes out and does extensive research on each type of cancer and ranks every treatment you can think of for it, including crazy alternatives.
Co-host: Wait a minute. We talked a lot about how funding influences how people perceive science, and one of the largest sources of funding is the military. It seems like pretty much every news magazine coming out has a picture of a robot machine on the cover. Do you ever see something like this actually happening, where we have killer robots or self-aware machines? Or do you ever see it happening in a garage, maybe somebody tinkering with an iPhone or some other device and making it possibly damaging?
Michael: It seems to me that robots are real and are becoming ubiquitous. The 2020s are going to be a really big transition in terms of robotics. There is so much stuff that is awesome now at laboratory grade but not reliable enough to actually be a good purchase, but quality of manufacturing is improving, price of manufacturing is improving, and it seems very likely that a continuation of the last couple of decades of robotics progress for another couple of decades will produce a mostly robotic military and probably a mostly robotic mostly everything [inaudible 00:53:25]: manufacturing, lots of different fields really.
Surgery, police, driving, trucking, shipping, boxing, whatever. That is one question.
Once you have lots of robots out there, if they are networked and controlled via the web, then hacking is a very real possibility. People could write viruses to cause them to behave badly, or get into back doors and control them directly. I’m not looking forward to some hacker taking over a fleet of UAVs and going over to bomb Libya. I don’t know how likely that is, but this sort of thing I would easily give a 20% probability to over the next 20 years.
You can also hack things with an iPhone, get into people’s bank accounts, all this typical stuff, and that becomes more powerful. As people really start to put machinery in their bodies, in the late 2020s and 2030s, I expect the security issues around that to be very serious as well.
Although I think a lot of the time people just won’t — even if you have an electronic connection to your heart rate controller, that electronic connection probably just won’t be online, because it would be a really bad idea to put that online.
Co-host: It’s gratifying to hear you talk about the security of things like this. I work in the computer security field as an executive for one of the biggest companies. I think about this quite a lot: as we put the internet of things and the internet of people online, the security and privacy implications get bigger and bigger, and one of the easiest ways to really break stuff is to hack a system that was designed without security in mind. I hope that the people who are designing artificial intelligence have at least basic security training, but I haven’t necessarily always seen that in the past.
Michael: Okay. That’s an issue of robotics, an issue of security, an issue of cybernetics. When I talk about artificial intelligence, I’m really almost exclusively talking about — I mean, I can talk about a world with no artificial intelligence and we still have data security issues, but that seems like a pretty separate issue from general artificial intelligence.
When we talk about general artificial intelligence, I’m not really interested in whether it takes over the world by using robots or whether it just hacks people’s bank accounts and hires mobsters to do it. There are a lot of different ways of influencing the world if you can manipulate information, and the quality of robots is probably not the important one. It might be.
Dave: That’s a really good point, and I’d like to explore that with you some more, but we’re running out of time in the interview.
Dave: Can you tell the listeners a little bit more about the Singularity Institute? Like where they can learn more about it, or about Less Wrong, or contribute to your efforts? Basically, what are the URLs or sites they should be heading to if they want to learn more?
Michael: The first place I would recommend is the blog Less Wrong, and it has an item called the Less Wrong Sequences. If you just Google “Less Wrong Sequences,” it will get you there. It’s a long series of essays on critical thinking and human cognitive judgment and mistakes, which is basically, to my mind, the groundwork that you should be working from if you want to try to get things right where almost everyone else is wrong, because if almost everyone else is wrong and you do things the same way they do, it just won’t work.
At a more kind of inspirational, fun level, the same author wrote Harry Potter and the Methods of Rationality, which is the most-reviewed fan fiction on the web and is supposed to teach critical thinking, in conjunction with some other skills, with a Harry Potter book. Totally appropriate for four-year-olds, much less a seven-year-old; totally appropriate for 70-year-olds. A fairly fun read even if you hate Harry Potter like I do.
The Singularity Institute website is very nice. It’s undergoing some revision. The Singularity Summit videos are up online, and they go back six years. That’s a couple of hundred hours of entertainment. The things people are doing are really surprising to almost everyone. I would recommend those videos a lot.
Finally, there’s the Future of Humanity Institute, which is our kind of Oxford collaboration. Nick Bostrom is a phenomenal philosopher. His papers are wonderful, and so are Robin Hanson’s and Tyler [inaudible 00:58:40]’s at George Mason University, in the economics department; the Future of Humanity Institute is a [inaudible 00:58:47] department, but there’s a lot of overlap in relevant skills.
Dave: We will include links to all of those in the show notes so that our listeners can come to the site and find you there. Michael Vassar, thanks again for representing the Singularity Institute on our radio show today.
Michael: Thank you, Dave, I’m glad we spoke.
Dave: If you haven’t had a chance, check out the new Bulletproof AirScape canister on UpgradedSelf.com. This is something we just put on the site, and it’s really awesome. It’s a canister you can use in the kitchen to maintain the freshness of anything, but especially your coffee beans.
It’s a 64-ounce, stainless steel canister with a special seal. You can push down the lid so it gets all the oxygen out, which means your coffee beans stay fresher, and other things that are likely to have problems from humidity in the air, things like mold forming on them or product degradation, become much less of a problem when you can just push the air out using the simple lid.
I’ve moved to using these canisters throughout my kitchen. They are worth checking out for your own food safety, not just to make your coffee last longer and taste better. The Bulletproof AirScape canister, on UpgradedSelf.com. I’d appreciate it if you checked it out.