Mind Network meeting

April 20, 2018
School of Philosophy, Psychology and Language Sciences, University of Edinburgh

Room 3.10/11
Dugald Stewart Building
Edinburgh EH8 9AD
United Kingdom

This will be an accessible event, including organized related activities

Sponsor(s):

  • Scots Philosophical Association

Speakers:

Marta Halina (Cambridge University)
Vincent C. Müller (Universität Erlangen-Nürnberg)
Carlos Zednik (Eindhoven University of Technology)

Organisers:

University of Edinburgh

Details

The Mind Network holds regular meetings around the UK at which researchers in philosophy of mind and cognition present research papers. The goal of the meetings is to discuss work in progress and allow members of the community to get to know each other. Graduate students and new arrivals to the UK community are particularly welcome. For more information about the Mind Network, visit the website: http://mindcogsci.net/

Programme

  • 10.30-11.00: Registration & coffee
  • 11.00-12.30: Carlos Zednik (Magdeburg): FROM MACHINE LEARNING TO MACHINE INTELLIGENCE
  • 12.30-13.00: Exchange Session
  • 13.00-14.00: Lunch (Own Arrangements)
  • 14.00-15.30: Marta Halina (HPS Cambridge): INSIGHTFUL AI
  • 15.30-16.00: Coffee Break
  • 16.00-17.30: Vincent C. Müller (Leeds): NEUROSURVEILLANCE
  • 17.30 onwards: Drinks and Dinner


Talks

Carlos Zednik (Magdeburg): FROM MACHINE LEARNING TO MACHINE INTELLIGENCE

Recent progress in artificial intelligence sparks new interest in an old philosophical question: Can machines think? In this talk I will consider the use of Machine Learning (ML) methods to develop intelligent thinking machines. Two criteria will be considered: behavioral indistinguishability and procedural (or algorithmic) similarity. It seems probable that ML methods will eventually yield computers that satisfy the former. But what about the latter? The inner workings of ML-programmed computers such as deep neural networks and reinforcement learning agents may be no easier to understand than those of human cognizers. Thus, I will review empirical methods for addressing this ‘Black Box Problem’, e.g. experimental techniques and methods of mathematical analysis. I will also consider a priori reasons for thinking that ML-programmed computers will not only become behaviorally indistinguishable from humans, but that they will also exhibit a degree of procedural similarity. Because these computers are nurtured and situated in the real-world environment that is also inhabited by humans, the methods they will acquire in order to engage that environment are likely to mirror our own. 

Marta Halina (HPS Cambridge): INSIGHTFUL AI

In March 2016, Google DeepMind’s computer program AlphaGo surprised the world by defeating the world-champion Go player, Lee Sedol. Go is a strategic game with a vast search space (including many more legal positions than atoms in the observable universe), which humans have been playing and studying for over 3000 years. Watching the tournament, the Go community was struck by AlphaGo’s moves—they were surprising, original, “beautiful”, and extremely effective. The moves were described as “creative” by the Go community and in follow-up talks on the subject, Demis Hassabis—leading AI developer and CEO of Google DeepMind—defended them as such. Should we understand AlphaGo as exhibiting human-like insight? Answering this question requires having an account of what constitutes insightful thought in humans and developing tests for measuring this ability in nonhuman systems.

In this talk, I draw on research in cognitive psychology to evaluate contemporary progress in AI, specifically whether new programs such as AlphaGo are best understood as exhibiting insight. Recent cognitive accounts of insight emphasise the importance of mental models (e.g., general causal models of the physical world) for generating insightful behaviour. Such models allow individuals to solve problems and make predictions in situations they have never encountered before. How do we determine whether and when new artificial agents are capable of employing such models? Here insights from comparative psychology can help. Over the last 40 years, comparative psychologists have been developing tests for identifying the use of mental models in nonhuman organisms. The application of such tests to AI may not only help us interpret deep neural networks, but also suggest ways in which the technology might be improved.

Vincent C. Müller (Leeds): NEUROSURVEILLANCE

The traditional problem of surveillance or privacy concerns personal data and behaviour – and it is believed that what we humans think, feel, desire and plan must be private because access to these cognitive processes is practically impossible, or even impossible in principle. We argue that current technical developments in live brain-imaging, EEG, brain implants and other brain-computer interfaces (BCIs) make it practically possible to detect data from the brain, analyse that data and extract significant cognitive content – including content that is not accessible to the subject themselves. Though all current techniques require close proximity, they do not require a conscious or collaborative subject. We conclude that neurosurveillance is a real, current threat to privacy. These considerations have relevance for two traditional issues: a) the alleged epistemic inaccessibility of phenomenal content in ‘other minds’, and b) the relevance of the philosophy of mind for empirical questions generally.

Registration

Registration is required (deadline: April 20, 2018, 5:00am EET) and is handled via an external site. Note that RSVPing on PhilEvents is not sufficient to register for this event.
