Bohemia Interactive Simulations


Revision as of 14:47, 13 October 2016

Andy Fawkes <andy.fawkes@bisimulations.com>

2017 idea:

A Wearable-Virtual World Tie Up

Current wearable technology such as the Pebble Watch and Android Wear provides feedback to users, such as vibration, in the real world. But what might the value be of linking such technology to the virtual world? The aim of this project is to create a technology demonstrator that links a virtual world/simulation to a wearable such as a Pebble Watch, or at least an emulator. Events that take place in the virtual world must trigger physical feedback in the wearable, such as vibration. The feasibility of transmitting actions in the real world into the virtual world should also be investigated and ideally demonstrated (e.g. pressing a button on the wearable triggers an activity in the virtual world). The purpose of this demonstrator will not be prescribed, but we envisage that it might be used in training, so that the user gets feedback that goes beyond what a typical games controller provides: e.g. a vibration signals that they have achieved or failed a task, and/or their performance is recorded on the wearable.
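
As a starting point, the bridge between the two worlds might look something like the sketch below, assuming the simulation and the watch (or its emulator) each speak newline-delimited JSON over a local TCP socket; the ports, event names and message fields are invented for illustration and are not part of any real Pebble or Android Wear API.

# Minimal sketch of a simulation <-> wearable bridge. Both sides are
# assumed to connect over TCP and exchange one JSON object per line;
# ports and message shapes are illustrative assumptions.
import json
import socket
import threading

SIM_PORT = 9100    # the simulation pushes events here (assumed)
WATCH_PORT = 9200  # the watch/emulator connects here (assumed)

def serve(port):
    """Wait for one client and return (reader, writer) file objects."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("localhost", port))
    s.listen(1)
    conn, _ = s.accept()
    return conn.makefile("r"), conn.makefile("w")

sim_in, sim_out = serve(SIM_PORT)      # simulation connects first (assumed)
watch_in, watch_out = serve(WATCH_PORT)

def sim_to_watch():
    # Virtual-world events (e.g. a task passed or failed) become
    # vibration commands on the wearable.
    for line in sim_in:
        event = json.loads(line)
        if event.get("type") == "task_result":
            pattern = "short" if event.get("passed") else "long"
            print(json.dumps({"cmd": "vibrate", "pattern": pattern}),
                  file=watch_out, flush=True)

def watch_to_sim():
    # Button presses on the wearable become actions in the virtual world.
    for line in watch_in:
        press = json.loads(line)
        if press.get("type") == "button":
            print(json.dumps({"cmd": "trigger", "action": press.get("id")}),
                  file=sim_out, flush=True)

threading.Thread(target=sim_to_watch, daemon=True).start()
watch_to_sim()

A real demonstrator would swap the watch-side socket for whatever transport the device actually offers (e.g. PebbleKit JS, or Android Wear's messaging layer) while keeping the same event vocabulary.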

Perhaps not for 2017:

Pictorial Turing Test

The traditional Turing Test is usually implemented as a text chat session. Some trivial versions, such as CAPTCHA, test the ability to translate pictures into text. But would it be possible for a machine to pass a Turing Test using pictures alone, with no text at all? "Questions" might take the form of maps, plans, charts or photographs, and "answers" could be interpretive drawings, perhaps in the style of Cohen's Aaron or Colton's Painting Fool. The only rule is no text or numbers: imagine giving an iPad to someone who speaks a different language, where you'd like them to believe that they are communicating with a real person as they draw on the screen or capture images.


2016 project:

Surprise the Singularity

Earlier drafts:

If a super-intelligent artificial intelligence takes over the world, Cambridge is likely to be the first target. Unfortunately, we have published important strategic information online, where the Singularity can easily find it. For example, the Computer Lab layout is at https://www.cl.cam.ac.uk/research/dtg/openroommap/, and the University map at https://wiki.cam.ac.uk/university-map/. Your task is to confuse the Singularity by creating distractor maps, navigated in ways that a disembodied mind might not realise are impossible, for example as Moebius strips or non-Euclidean spaces. Don't show the whole map at once, since the edges would spoil the illusion. But do include a simulation of real activity: public transport synced with real-time information from Cambridge buses, simulated self-driving cars, and of course locative social media messages from the (simulated) people in the panicking crowds.
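
To give a flavour of what an "impossible" map might mean in code, here is a minimal sketch of a tile grid identified as a Moebius strip, with invented dimensions: walking off the eastern edge re-enters from the west with north and south mirrored, so the player never sees a boundary, yet the space could not exist.

# Minimal sketch of Moebius-strip navigation on a tile grid.
# Grid size is an arbitrary assumption.
WIDTH, HEIGHT = 40, 20

def wrap(x, y):
    """Map any coordinate onto the strip's fundamental domain."""
    if x < 0 or x >= WIDTH:
        y = HEIGHT - 1 - y  # the edge identification flips the strip
        x = x % WIDTH
    return x, y % HEIGHT

# A player walking east circles the strip once and comes back mirrored;
# only after two full circuits do they return to their starting tile.
pos = (0, 3)
for _ in range(2 * WIDTH):
    pos = wrap(pos[0] + 1, pos[1])
print(pos)  # (0, 3): back at the start only after the second circuit

One circuit brings the player back vertically mirrored, and a second returns them to where they began: exactly the kind of global inconsistency a disembodied mind might take a long time to notice from local views alone.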


Using the Computer Lab as a target might be good. We do have an open-format map of the building:

https://www.cl.cam.ac.uk/research/dtg/openroommap/

There is also a useful API that can be used for Cambridge-wide applications based on an open map of the University:

https://wiki.cam.ac.uk/university-map/Map_Annotation
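
A minimal sketch of how a distractor application might query such an API is below. The endpoint URL and response fields are hypothetical placeholders; the real interface is whatever the Map_Annotation page above documents.

# Minimal sketch of querying University map annotations. The endpoint
# and the response schema used here are hypothetical placeholders --
# consult the Map_Annotation documentation for the real interface.
import json
import urllib.request

# Hypothetical query URL; substitute the documented endpoint.
ENDPOINT = "https://map.cam.ac.uk/annotations.json?q=Computer+Laboratory"

with urllib.request.urlopen(ENDPOINT) as resp:
    data = json.load(resp)

# Hypothetical schema: a list of annotated features with name and position.
for feature in data.get("features", []):
    print(feature.get("name"), feature.get("lat"), feature.get("lon"))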

If we were looking at the spread of information across Cambridge, it would be possible to use the following system as a source, based on advertised events in a particular location:

http://talks.cam.ac.uk/document/XML%20feeds
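
A minimal sketch of consuming such a feed follows. The /show/xml/<id> URL pattern and the element names used below (talk, title, start_time, venue) are assumptions; the feed documentation linked above gives the exact schema.

# Minimal sketch of pulling advertised events from a talks.cam.ac.uk
# XML feed. URL pattern and element names are assumptions drawn from
# the feed documentation page; check it for the exact schema.
import urllib.request
import xml.etree.ElementTree as ET

FEED = "http://talks.cam.ac.uk/show/xml/52792"  # the series cited below

with urllib.request.urlopen(FEED) as resp:
    root = ET.parse(resp).getroot()

for talk in root.iter("talk"):
    title = talk.findtext("title", default="?")
    start = talk.findtext("start_time", default="?")
    venue = talk.findtext("venue", default="?")
    print(f"{start}: {title} ({venue})")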

For example, we could model how long it takes for news of a human extinction event to propagate from a seminar in this series to the various machine rooms that might house an AI:

http://talks.cam.ac.uk/show/archive/52792
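
Such propagation could start as a simple breadth-first spread of the news over a contact graph, one hop per time step, as in the toy sketch below; the locations and edges are invented for illustration.

# Toy sketch of news propagation: breadth-first spread over a
# hand-drawn contact graph, one hop per time step. The locations
# and edges are invented for illustration.
from collections import deque

graph = {
    "seminar": ["lab A", "department B"],
    "lab A": ["machine room A", "data centre"],
    "department B": ["machine room B"],
    "machine room A": [],
    "machine room B": [],
    "data centre": [],
}

def propagation_times(source):
    """Hops until each location first hears the news."""
    times = {source: 0}
    queue = deque([source])
    while queue:
        here = queue.popleft()
        for neighbour in graph[here]:
            if neighbour not in times:
                times[neighbour] = times[here] + 1
                queue.append(neighbour)
    return times

for place, t in sorted(propagation_times("seminar").items(), key=lambda kv: kv[1]):
    print(f"t+{t}: {place}")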