Speaker: Peter Doherty

Professor Peter Doherty – Nobel Laureate


Peter Charles Doherty AC (born 15 October 1940) is an Australian veterinary surgeon and medical researcher. He received the Albert Lasker Award for Basic Medical Research in 1995, shared the 1996 Nobel Prize in Physiology or Medicine with Rolf M. Zinkernagel, and was named Australian of the Year in 1997.[1] In the Australia Day Honours of 1997 he was named a Companion of the Order of Australia for his work with Zinkernagel, who was named an honorary Companion. He is also a National Trust Australian Living Treasure. The son of Linda and Eric Doherty, he grew up with a younger brother, Ian; he skipped a grade in school and entered the University of Queensland at 17.

Research

Doherty’s research focuses on the immune system, and his Nobel work described how the body’s immune cells protect against viruses. He and Rolf Zinkernagel, co-recipient of the 1996 Nobel Prize in Physiology or Medicine, discovered how T cells recognize their target antigens in combination with major histocompatibility complex (MHC) proteins.

Viruses infect host cells and reproduce inside them. Killer T cells destroy those infected cells so that the viruses cannot reproduce. Zinkernagel and Doherty discovered that, in order for killer T cells to recognize infected cells, they had to recognize two molecules on the surface of the cell: not only the virus antigen, but also a molecule of the major histocompatibility complex (MHC). This recognition is done by a T-cell receptor on the surface of the T cell. The MHC had previously been identified as being responsible for the rejection of incompatible tissues during transplantation. Zinkernagel and Doherty showed that the MHC is also involved in the immune response to viral infection, in their case the lymphocytic choriomeningitis virus they studied in mice.

Peter Doherty – Scepticism, Denial and Ignorance: There is a Difference – Vic Skeptics 2013

Nobel Laureate Dr Peter Doherty speaks at Skeptics Victoria! http://vicskeptics.wordpress.com/ Professor Peter Doherty presents to the Victorian Skeptics on Monday 18 March 2013. The talk is titled Scepticism, Denial and Ignorance: There is a Difference.

Pandemics: What Everyone Needs to Know

Nobel Laureate Peter Doherty offers a level-headed guide to all aspects of pandemics: what they are, how they spread, and what we can do to prevent them.

Pandemics. The word conjures up images of horrific diseases sweeping the globe and killing everyone in their path. But such highly lethal illnesses almost never create pandemics. The reality is deadly serious but far more subtle.

In Pandemics, Peter Doherty, who won the Nobel Prize for his work on how the immune system recognizes virus-infected cells, offers an essential guide to one of the truly life-or-death issues of our age. In concise, question-and-answer format, he explains the causes of pandemics, how they can be counteracted with vaccines and drugs, and how we can better prepare for them in the future. Doherty notes that the term “pandemic” refers not to a disease’s severity but to its ability to spread rapidly over a wide geographical area. Extremely lethal pathogens are usually quickly identified and confined. Nevertheless, the rise of high-speed transportation networks and the globalization of trade and travel have radically accelerated the spread of diseases. A traveler from Africa arrived in New York in 1999 carrying the West Nile virus; one mosquito bite later, it was loose in the ecosystem. Doherty explains how the main threat of a pandemic comes from respiratory viruses, such as influenza and SARS, which disseminate with incredible speed through air travel. The climate disruptions of global warming, rising population density, and growing antibiotic resistance all complicate efforts to control pandemics. But Doherty stresses that pandemics can be fought effectively. Often simple health practices, especially in hospitals, can help enormously. And research into the animal reservoirs of pathogens, from SARS in bats to HIV in chimpanzees, shows promise for our prevention efforts.

Calm, clear, and authoritative, Peter Doherty’s Pandemics is one of the most critically important additions to the What Everyone Needs to Know series.


Speaker: Andrew Dun

Andrew Dun, former philosophy graduate student.

Andrew’s work in the philosophy of mind is accompanied by an ongoing interest in the social and cultural impacts of emerging technologies.

Andrew’s philosophical work relates to the ontology of consciousness. Andrew proposes the view that the relationship between physical and phenomenal properties is one of mutual representation, rather than supervenience. Here, neither domain is regarded as more fundamental. This relationship can be considered as analogous to wave-particle duality as described in some interpretations of quantum mechanics, but where the duality is instead taken to obtain between the physical and phenomenal domains themselves, rather than their constituents.

Andrew argues that on this view we can factor out ontology and approach questions about consciousness from the perspective of functional materialism. This approach can inform our questions about consciousness as they relate to applications such as strong artificial intelligence and mind uploading.

Andrew previously presented at the Melbourne Singularity Summit, 2012 (see the ‘Zombie Rights’ footage below).

Zombie Rights

Andrew Dun spoke at the Singularity Summit Australia 2012. Talk title: “Zombie Rights”.

A discussion of the rights of sentient entities. Drawing inspiration from quantum complementarity, Andrew defends a complementary notion of ontological dualism, countering zombie hypotheses. With zombie concerns set aside, ethical discussions should therefore focus on assessing consciousness purely in terms of the physical-functional properties of any putatively conscious entity.

Slides for the presentation can be found on SlideShare.

Speaker: David Pearce


David Pearce will be speaking at Science, Technology & the Future on Nov 30 – Dec 1st 2013 in Melbourne, Australia. He is a British utilitarian philosopher who promotes the idea that there is a strong ethical imperative for humans to work towards the abolition of suffering in all sentient life. His book-length internet manifesto, The Hedonistic Imperative, outlines how technologies such as genetic engineering, nanotechnology, pharmacology, and neurosurgery could potentially converge to eliminate all forms of unpleasant experience among human and non-human animals, replacing suffering with gradients of well-being, a project he refers to as “paradise engineering”. A transhumanist and a vegan, Pearce believes that we (or our future posthuman descendants) have a responsibility not only to avoid cruelty to animals within human society but also to alleviate the suffering of animals in the wild.

Pearce is the owner of BLTC Research, a website he set up in 1995. Based in Kemptown, Brighton, UK, the site publishes online texts in support of the biochemical and biotechnological methods by which its proponents believe sentient suffering could be abolished in future generations.

In 1998, Pearce co-founded the World Transhumanist Association (WTA) with Nick Bostrom, an Oxford philosopher. The association, which later changed its name to Humanity+, advocates transhumanism: an ideology and movement that supports the recognition and protection of citizens’ right to maintain or modify their own minds and bodies, so as to guarantee freedom of choice and informed consent in the use of human enhancement technologies on themselves and their children.

In 2002 Pearce co-founded the Abolitionist Society with Pablo Stafforini, Sean Henderson, and Jaime Savage to promote the abolition of suffering and to discuss its implications with a wider audience.

Pearce sits on the board of Elsevier’s journal Medical Hypotheses and is on the advisory board of the Lifeboat Foundation.[10] He also runs a web hosting company.

The Hedonistic Imperative

The Hedonistic Imperative outlines how genetic engineering and nanotechnology will abolish suffering in all sentient life.

The abolitionist project is hugely ambitious but technically feasible. It is also instrumentally rational and morally urgent. The metabolic pathways of pain and malaise evolved because they served the fitness of our genes in the ancestral environment. They will be replaced by a different sort of neural architecture – a motivational system based on heritable gradients of bliss. States of sublime well-being are destined to become the genetically pre-programmed norm of mental health. It is predicted that the world’s last unpleasant experience will be a precisely dateable event.

Two hundred years ago, powerful synthetic pain-killers and surgical anesthetics were unknown. The notion that physical pain could be banished from most people’s lives would have seemed absurd. Today most of us in the technically advanced nations take its routine absence for granted. The prospect that what we describe as psychological pain, too, could ever be banished is equally counter-intuitive. The feasibility of its abolition turns its deliberate retention into an issue of social policy and ethical choice.

This manifesto combines far-fetched utopian advocacy with cold-headed scientific prediction. The Hedonistic Imperative outlines how nanotechnology and genetic engineering will eliminate aversive experience from the living world. Over the next thousand years or so, the biological substrates of suffering will be eradicated completely. “Physical” and “mental” pain alike are destined to disappear into evolutionary history. The biochemistry of everyday discontents will be genetically phased out too. Malaise will be replaced by the biochemistry of bliss. Matter and energy will be sculpted into life-loving super-beings animated by gradients of well-being. The states of mind of our descendants are likely to be incomprehensibly diverse by comparison with today. Yet all will share at least one common feature: a sublime and all-pervasive happiness.

This feeling of absolute well-being will surpass anything contemporary human neurochemistry can imagine, let alone sustain. The story gets better. Post-human states of magical joy will be biologically refined, multiplied and intensified indefinitely. Notions of what now passes for tolerably good mental health are likely to be superseded. They will be written off as mood-congruent pathologies of the primordial Darwinian psyche. Such ugly thoughts and feelings will be diagnosed as typical of the tragic lives of emotional primitives from the previous era. In time, the deliberate re-creation of today’s state-spectrum of normal waking and dreaming consciousness may be outlawed as cruel and immoral.

Such speculations may currently sound fantastical. Yet the ideas behind this manifesto may one day be regarded as intellectually trite – albeit today morally urgent. For as the genetic revolution in reproductive medicine unfolds, what might once have been the stuff of millennialist fantasy is set to become a scientifically feasible research program. Its adoption or rejection will become, ultimately, a social policy issue. Passively or actively, we will have to choose just how much unpleasantness we wish to create or conserve – if any – in eras to come.

David Pearce – The Hedonistic Imperative, Chapter 1: The Naturalisation of Heaven

Prophetic Narratives: Will Humanity’s Successors Also Be Our Descendants?

David Pearce spoke at Humanity+ @San Francisco in 2012. Accelerating technological progress leads some futurists to predict the imminent end of the transhuman era and the dawn of posthuman superintelligence. But what is superintelligence? How does intelligence relate to sentience? What are the Explanatory Gap, Moravec’s Paradox, and the Binding Problem? Will nonbiological machines ever be more than zombies? This talk explores three different narratives for the major evolutionary transition in prospect. In the first narrative, biological humans will rewrite our genetic source code, recursively self-edit our own minds, and bootstrap our way to full-spectrum superintelligence. Mastery of our reward circuitry will deliver life based on information-sensitive gradients of bliss. In the second, Kurzweilian narrative, cybernetic brain implants will enable humans to fuse our minds with artificial intelligence, and also allow humans to scan, digitize and “upload” ourselves into a less perishable substrate. In digital nirvana, the distinction between biological and nonbiological machines will effectively disappear. In the third scenario, most closely associated with mathematician I.J. Good, a combination of Moore’s law and the advent of recursively self-improving software-based minds culminates in an ultra-rapid Intelligence Explosion and an era of nonbiological superintelligence. Posthuman superintelligence may or may not be human-friendly. How strong is the supporting evidence for each of these prophecies?

Interviews

Alleviating Suffering & Achieving Hedonic Zero / Altruism

The Naturalisation of Heaven – The Lotus Eaters – Happiness & Motivation

The Hedonistic Imperative outlines how genetic engineering and nanotechnology will abolish suffering in all sentient life. This project is ambitious but technically feasible. It is also instrumentally rational and ethically mandatory. The metabolic pathways of pain and malaise evolved only because they once served the fitness of our genes. They will be replaced by a different sort of neural architecture. States of sublime well-being are destined to become the genetically pre-programmed norm of mental health. The world’s last aversive experience will be a precisely dateable event.
– David Pearce

“I predict we will abolish suffering throughout the living world. Our descendants will be animated by gradients of genetically pre-programmed well-being that are orders of magnitude richer than today’s peak experiences.”
– David Pearce, quoted on p. 114 of Ethics Matters by Peter and Charlotte Vardy (SCM Press, 2012)
Video Interviews

For more video interviews, please subscribe to Adam Ford’s YouTube channel.


Genomic Bodhisattva interview at H+ Magazine

Marcus Hutter – Universal Artificial Intelligence

Universal Artificial Intelligence

Last year I did a series of interviews with Marcus Hutter while he was down in Melbourne for the Singularity Summit Australia 2012.

Marcus will also be speaking at the Science, Technology & the Future conference on Nov 30 – Dec 1 2013 in Melbourne, Australia.

Hutter uses Solomonoff’s inductive inference as a mathematical formalization of Occam’s razor. To this formalization he adds the expected value of an action: shorter computable theories (those of lower Kolmogorov complexity) carry more weight when calculating the expected value of an action across all computable theories that perfectly describe previous observations.
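For readers who want the underlying formula: the weighting Hutter builds on is Solomonoff’s universal prior. In the standard notation (assumed here from Hutter’s book, not stated on this page), the prior probability of a binary sequence beginning with x is

$$M(x) = \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)}$$

where U is a universal monotone Turing machine, the sum runs over all programs p whose output starts with x, and \ell(p) is the length of p in bits. A theory that can be coded in one fewer bit receives twice the weight, which is Occam’s razor made quantitative.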

At any time, given the limited observation sequence so far, what is the Bayes-optimal way of selecting the next action? Hutter proved that the answer is to use Solomonoff’s universal prior to predict the probability of each possible future, and to execute the first action of the best policy (a policy being any program that outputs the next actions and inputs the next perceptions up to the horizon). A policy is best if, on a weighted average over all possible futures, it maximizes the predicted reward up to the horizon. He called this universal algorithm AIXI.
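Spelled out as an equation, AIXI’s action choice at time t with horizon m looks as follows; this is reproduced from Hutter’s published definition, so treat the exact notation as a paraphrase rather than something quoted from this page:

$$a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \big[r_t + \cdots + r_m\big] \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Each program q is a candidate environment; only environments consistent with the full interaction history of actions a, observations o, and rewards r contribute, each weighted by 2^{-\ell(q)}, and the agent executes the first action of the policy with the highest weighted expected reward.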

Below is a transcription of the parts of the interview series where Marcus talks about intelligence, bounded rationality, and AIXI.

What is Intelligence?

Intelligence is a very difficult concept (maybe that’s the reason why many people try to avoid defining it, or consider narrower alternatives). I’ve worked on this question for many, many years now. We went through the literature – the psychology literature, the philosophy literature, the AI literature – and looked at the definitions that individuals, researchers, and groups came up with; they are very diverse. But there seems to be one recurrent theme, and if you want to put it in one sentence, then you could define intelligence as:
“an agent’s ability to achieve goals in a wide range of environments”, or to succeed in a wide range of environments.
Now look at this sentence and ask: “wow, how can this single sentence capture the complexity of intelligence?” There are two answers to this question. First, many aspects of intelligence are emergent properties of this definition, like being able to learn: if I want to succeed or solve a problem, I need to acquire new knowledge, so learning is an emergent phenomenon of this definition.
And the second answer is: this is just a sentence consisting of a few words. What you really have to do, and that’s the hard part, is to transform it into meaningful equations and then study those equations. And that’s what I have done over the last 12 years.
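One published form of that transformation is Legg and Hutter’s universal intelligence measure; the notation below comes from their paper, not from this interview:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)}\, V_\mu^\pi$$

Here \pi is the agent, E is the set of all computable reward-bearing environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the expected total reward \pi achieves in \mu. “Ability to achieve goals” becomes expected reward, and “a wide range of environments” becomes a simplicity-weighted sum over every computable environment.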

Bounded Rationality

It is an interesting question whether resource bounds should be included in any definition of intelligence or not, and the natural answer is of course they should. Well, there are several problems. The first is that nobody has ever come up with a reasonable theory of bounded rationality (people have tried), so it seems to be very hard. And this is not specific to AI or intelligence; it seems to be symptomatic in science. If you look at other fields (take physics, the crown discipline), theories have been developed: Newtonian mechanics, general relativity, quantum field theory, the Standard Model of particle physics. They are more and more precise, but they get less and less computable, and having a computable theory is not a guiding principle in developing these theories. Of course, at some point you have to test these theories and do something with them, and then you need a computable theory; this is a very difficult issue, and you have to approximate them or do something about it. But building computational resources into the fundamental theories is not how things work, at least in physics, and the same holds if you look at other disciplines.
You design theories so that they describe your phenomenon as well as possible, and the computational aspect is secondary. Of course, if a theory is incomputable and you can’t do anything with it, you have to come up with another theory, but this always comes second. Only in computer science, and this comes naturally, do researchers first think about how to design an efficient algorithm to solve a problem; and since AI traditionally sits in the computer science department, the mainstream thought is “how can I build a resource-bounded artificially intelligent system?”. I agree that ultimately this is what we want. But the problem is so hard that we (or a large fraction of scientists) should take this approach: model the problem first, define the problem first, and once we are confident that we have solved this problem, go to the second phase and try to approximate the theory, try to make a computational theory out of it. And then there are many, many possibilities: you could still try to develop a resource-bounded theory of intelligence, which will be very, very hard if you want it to be principled, or you do some heuristics, or many other options. Or, the short answer: maybe I am not smart enough to come up with a resource-bounded theory of intelligence, therefore I have only developed one without resource constraints.

AIXI

Ok, so now we have this informal definition that intelligence is an agent’s ability to succeed or achieve goals in a wide range of environments. The point is that you can formalize this, and we have done that; it is called AIXI. Or rather: Universal AI is the general theory, and AIXI is the particular agent which acts optimally in this sense.
It works as follows: there is a planning component and a learning component. For the learning component, think about a robot walking around in the environment. At the beginning it has little or no knowledge about the world, so what it has to do is acquire data and knowledge of the world and then build its own model of how the world works. It does that using very powerful general theories of how to learn a model from data, even in very complex scenarios. This theory is rooted in Kolmogorov complexity and algorithmic information theory; the basic idea is that you look for the simplest model which describes your data sufficiently well. The agent or robot has to do this continuously: it gets new data and updates its model. So now the agent has this model; that is the learning part. It can then use this model for predicting the future. And it uses these predictions in order to make decisions: the agent thinks, “if I do this action, then this will happen, and that is good or bad” (I’ll come to the good-or-bad part soon), “and if I do this other action, it is maybe better or worse.” Then the “only” thing the agent has to do is think about all the potential future action sequences and take the one which is best according to the model it has learned, which is not perfect but which gets better and better over time. Finally, you have to specify what “best” means, and that is the utility part, or succeeding: the agent gets occasional rewards from a teacher, who could just be a human, or the reward could be built in (for instance, if the battery level is low that is bad, if it’s high that is good; if it finds a rock on Mars that is good, if it falls down a cliff that is bad). So we have these rewards, and the goal of the agent is to maximize its reward over its lifetime. That’s the planning part. So first comes the learning part, then the prediction part, then the planning part, and then it takes actions and the cycle continues.
So this theory, the AIXI agent, is mathematically rigorously well defined. It is essentially unique, and you can prove amazing properties of this agent; in a certain sense you can prove that it is the most intelligent system possible. (I am translating the mathematical theorems into words, which is a little tricky, but that is the essence.) The downside is that it is incomputable. You asked before about resource-bounded intelligence: AIXI needs infinite computational resources, and in order to do something with it you need to approximate it, which we have also done in recent years. At the moment it is at the toy stage, so it can play PacMan, Tic-Tac-Toe, some simple form of poker, and some other games. The point is not that it is able to play PacMan or Tic-Tac-Toe (they are not hard); the point is that the agent has no knowledge about these games. It starts really blank, and just by interacting with the environment – it does not even know the rules of the game – by interacting with this poker environment or PacMan environment it figures out what is going on and learns how to behave well.
The really cool thing, and the difference from many other projects (Deep Blue plays chess better than the grandmasters, but it was a system specifically designed to play chess, and it can’t play Go), is that this system is not tailored to any particular application. If you interface it with any problem (in theory it can be any problem: chess, solving a scientific problem), it will learn to do it very well, and indeed optimally. The approximations we have at the moment are of course very limited, but if you look at them, they use standard compressors for the model-learning part; there is nothing about PacMan in these data compressors, they are standard data compressors. For the planning part we use standard Monte Carlo (random search), which has nothing to do with a particular problem or game, and this approximation is already able to learn these various games by itself. There is no PacMan knowledge built in. The only thing you have to do, of course, is interface the game with the agent. For PacMan you have these pixels in a 15×15 grid, and each square is a wall, is free, is food, or holds a ghost; this piece of information you give the agent, and then it gets a negative reward if it gets eaten by a ghost and a positive reward if it eats a pellet. That’s it: the goal of the agent is to maximize reward, and everything else it figures out by itself.
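To make the learn-predict-plan loop Marcus describes concrete, here is a minimal Python sketch. It is emphatically not MC-AIXI: the real approximation uses a compression-based model (context-tree weighting) and a UCT-style tree search for planning, whereas this toy substitutes a frequency table for the compressor and plain Monte Carlo rollouts for the search. The ToyEnvironment, its two-state rules, and all names below are invented for illustration.

```python
import random
from collections import defaultdict

class ToyEnvironment:
    """Invented two-state world: action 1 taken in state 1 pays off.
    The agent is never told these rules; it only sees observations and rewards."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        reward = 1.0 if (self.state == 1 and action == 1) else 0.0
        self.state = random.randint(0, 1)   # the observation is the next state
        return self.state, reward

class ModelBasedAgent:
    """Learns a frequency model of (obs, action) -> (next_obs, reward),
    then plans by Monte Carlo rollouts through its own learned model."""
    def __init__(self, actions=(0, 1), horizon=5, rollouts=30):
        self.actions, self.horizon, self.rollouts = actions, horizon, rollouts
        self.model = defaultdict(list)      # (obs, action) -> list of (next_obs, reward)

    def learn(self, obs, action, next_obs, reward):
        self.model[(obs, action)].append((next_obs, reward))

    def estimate_return(self, obs, first_action):
        """Average return of simulated futures that start with first_action."""
        total = 0.0
        for _ in range(self.rollouts):
            o, a, ret = obs, first_action, 0.0
            for _ in range(self.horizon):
                outcomes = self.model.get((o, a))
                if not outcomes:            # no data for this situation yet
                    break
                o, r = random.choice(outcomes)
                ret += r
                a = random.choice(self.actions)   # crude uniform rollout policy
            total += ret
        return total / self.rollouts

    def act(self, obs):
        if random.random() < 0.1:           # keep exploring forever (toy choice)
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.estimate_return(obs, a))

env, agent = ToyEnvironment(), ModelBasedAgent()
obs, running_reward = env.state, 0.0
for t in range(3000):
    action = agent.act(obs)
    next_obs, reward = env.step(action)
    agent.learn(obs, action, next_obs, reward)   # learning part
    running_reward += reward
    obs = next_obs                               # prediction and planning happen in act()
print(f"average reward per step: {running_reward / 3000:.2f}")
```

Run as-is, the agent settles on "take action 1 when the observation is 1" and earns close to the optimal 0.5 average reward per step, purely from interaction. Swapping the frequency table for a genuine compressor-based predictor and the uniform rollouts for UCT is, roughly, what turns this shape into the MC-AIXI approximation mentioned above.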

Video Interviews

For more video interviews, please subscribe to Adam Ford’s YouTube channel.

YouTube playlist of the interview series with Marcus Hutter:

At Singularity Summit Australia 2012 – “Can Intelligence Explode?”

Speaker: Marcus Hutter

Marcus Hutter (born 1967) is a German computer scientist and professor at the Australian National University. Hutter was born and educated in Munich, where he studied physics and computer science at the Technical University of Munich. In 2000 he joined Jürgen Schmidhuber’s group at the Swiss Artificial Intelligence lab IDSIA, where he developed the first mathematical theory of optimal Universal Artificial Intelligence, based on Kolmogorov complexity and Ray Solomonoff’s theory of universal inductive inference. In 2006 he also accepted a professorship at the Australian National University in Canberra.

Hutter’s notion of universal AI describes the optimal strategy of an agent that wants to maximize its future expected reward in some unknown dynamic environment, up to some fixed future horizon. This is the general reinforcement learning problem. Solomonoff/Hutter’s only assumption is that the reactions of the environment in response to the agent’s actions follow some unknown but computable probability distribution.

Read more