Frontier

From Computer Laboratory Group Design Projects
Main contact is Matt Johnson <mjohnson@frontier.co.uk>, working with Olly Powell <opowell@frontier.co.uk>


==Potential project for 2022==


==2021==

===Confirmed: [[Augmented Room Dressing for Zoom]]===

Second project that was considered: Social Distance Modelling

Businesses would like to open up commercial spaces adapted for social distancing measures. But what's the best way to guide people through these spaces so they have the best chance of maintaining distance? Design a simulation that can model customers as particles that move freely around these public spaces while keeping to an avoidance radius with each other. Include directional markup (one-way systems) that controls the flow of customers around the space. Track congestion metrics to evaluate the safety of the building. With a complete simulation, implement machine learning techniques to generate the best directional markup for a given space. Perhaps there are some good common techniques that any business can learn from!

I wonder if the Feng Shui idea could be modified to provide some Covid relief? Perhaps something that generates a dynamic Zoom background based on a captured model of the student's actual room, but with augmentations?

My Logitech webcam has an app with controls to pre-process camera input before it goes to Zoom/Teams etc. However, perhaps it does this on-camera, since it seems to be OS-independent.

==2020==

===Confirmed: [[Robotic Warehouse Design Suite]]===
Online shopping is taking over the world! A growth area in robotics at the moment is the use of robots and AI in the efficient running of warehouses and retail storage facilities. The team should produce a virtual warehouse simulation and demonstrate how AI may be applied on this training suite to control bots storing and retrieving randomised streams of orders for items, allowing the suite to find efficient methods of organising and driving a robotic warehouse area. How AI is applied to the system, whether in terms of organisational control, or manipulation, or both, is up to the team to decide.

===Locomotion System API middleware for games===
In today's game development world there are middleware solutions for many of the problems facing games and simulation developers. One of the areas currently dominated by bespoke programming is character locomotion through an environment. Focusing on 3d applications, we would ask a project team to create a general library allowing developers to specify and create instances of controllable objects which take navigation directives and, within the specified movement parameters of the object, navigate the world in which they exist to reach their objective. This may involve traversing uneven or barricaded terrain, and possibly even climbing walls or traversing ceilings where necessary. The solution must have a means of connecting with animation assets to control the movements of the object. The system must support bipedal and other forms of controllable objects.

===Accessibility Assessor===
Today, with smartphones being such powerful computational devices equipped with cameras, the opportunity to capture data from the world around us has greatly increased. This project aims to produce a piece of software which can be used on a smartphone to capture the 3d shape of an interior environment in order to produce a simplified computer model. This model may then be processed, perhaps by another device, in order to produce a node-graphed plan of the floorspace. This plan could provide useful views of the space in terms of accessibility, and aid in planning the usage of existing space.

===Tidy Rescue Bot===
We are seeing an ever-increasing use of AI in real-world applications. One of the most useful potential applications is to enable robots to do work in environments which could be hazardous to humans. Examples include earthquakes, the aftermath of tsunamis, or even the problematic resolution of the Chernobyl nuclear disaster. We would like the team to create a simplified 3d simulation of such hazardous environments and produce a prototype software system to show the possibilities of using AI to attempt to safely clear or access areas following disasters. This may involve clearing rubble, or identifying possible locations of victims. The bot should be demonstrated working in a virtual environment, showing the decisions and processes that make a rescue attempt feasible.

===Feng Shui Online===
Humans are always going to be useful when tasked with subjective problems. In Feng Shui Online you create a fun gamified physics sandbox of your room and present it to other players online to tidy. You then divide a quota of points among the players that made the best versions of your newly organised room. Does it spark joy? Keep the experience engaging so that players enjoy the process. With the data generated from the players, train an AI to see if it can generalise and produce satisfying results.

==Project proposals for 2016:==

===AFB/IML to comment - we select 1 or 2===

Option 1: Indy Community Framework

'Independent' or 'Indy' game development has become an exciting and sometimes successful area. Many people are involved, and some of the most successful new properties in the games industry are forged in this area. Projects are often developed by people meeting through forums and never meeting face to face. Often designers will seek programmers and artists to execute sections of their projects, and turn to sites such as Kickstarter to find external funding to cover these costs, or need to invest their own earnings and savings to do so.

The difficulty in this is that, when there are successes, it can be hard for projects to fairly represent all their contributors in that success without complicated arrangements. Students are asked to consider whether they can build a framework to bring together a community of developers and content creators to support their work together and to match needs with providers. It could possibly also automate the tracking and 'share' of work done by each party in a project, allowing finances to be split representatively between the creators. This would offer content providers a more exciting, risk-taking form of participation, in the knowledge that, should the project succeed, they have a clear entitlement to returns relative to its success, rather than a flat rate for their hours.

Some careful consideration must go into how project assets are valued (how many 'points' are agreed suitable for a certain job) and how the framework can support the group in agreeing and communicating these details.

------

Option 2: Interior Environment Creation

A constant difficulty across many types of video games is the effort required to build suitable play environments, especially interior spaces, whether buildings, spaceships, or other player-explorable areas. Whilst many suitable automatic generation solutions exist for natural terrain, interior spaces, particularly those with some kind of functionality and utility services such as power and water, are much harder to generate usefully. To further complicate the problem, most games benefit from their terrains undergoing some design control to funnel play in a particular direction. Your challenge is to produce a software tool that allows the automatic creation of 3d interior spaces, giving some level of design control over the results so they can be reshaped by a designer. Results should be viewable in 3d in a package or format of your choice. While the output is not required to rival the latest games software, the system should be extensible to improve its level of output quality.

--------

Option 3:

With this one I wanted to consider the potential in doing something with the more modern aspects of game development which are arising as concerns for developers at this point.

The two things I was considering were either asking the students to produce something utilising multi-context rendering, as supported in DirectX 12. This would be a very focused technical project, and there are an almost infinite number of directions and applications this could take, but generally it would revolve around 'what bits of a render pipeline can be run together, and how do we shuffle things to make it useful'.

The second was the potential of asking students to look at using OpenCL to parallelise route finding and character logic (or some other essential game service which is quite heavily intertwined with the turnover of a running product).

With these options I've been mulling them over and umming and ahhing about whether they would be appropriate, or just 'too dry' or involved, as they may well not produce an exciting 'resulting app' given the time constraints of the project, and are mostly performance-related projects.

==Project proposals for 2019:==

with Olly Powell

[[VR Avatar]]

====Other suggestions not used this year====

Character Locomotion Middleware

In today's game development world there are middleware solutions for many of the problems facing games and simulation developers. One of the areas currently dominated by bespoke programming is character locomotion through an environment. Focusing on 3d applications, we would ask a project team to create a general library allowing developers to specify and create instances of controllable objects which take navigation directives and, within the specified movement parameters of the object, navigate the world in which they exist to reach their objective. This may involve traversing uneven or barricaded terrain, and possibly even climbing walls or traversing ceilings where necessary. The solution must have a means of connecting with animation assets to control the movements of the object. The system must support bipedal and other forms of controllable objects.

Automatic accessibility assessor

Today, with smartphones being such powerful computational devices equipped with cameras, the opportunity to capture data from the world around us has greatly increased. This project aims to produce a piece of software which can be used on a smartphone to capture the 3d shape of an interior environment in order to produce a simplified computer model. This model may then be processed, perhaps by another device, in order to produce a node-graphed plan of the floorspace. This plan could provide useful views of the space in terms of accessibility, and aid in planning the usage of existing space.

City generation

Create a program that can procedurally generate a city complete with a road network and basic infrastructure. The goal is to describe methods of generation that evaluate as well-networked and functional cities, whether exotic or traditional. The system should be able to highlight an estimated flow of people and traffic at various times of day, depending on the function of buildings (such as consumer or business locations) and the positioning of public services. Heat maps of access times for ambulance and fire services, for example, should be available, derived from the road network.
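A sketch of how such a generate-then-evaluate loop might start is shown below. The grid layout, the building types, and the fire-station location are all illustrative assumptions, not a prescribed design; access times come from a breadth-first search over the road network.

```python
import random
from collections import deque

random.seed(1)
W, H = 9, 9
# Roads on every third row/column; remaining cells are building plots.
is_road = [[x % 3 == 0 or y % 3 == 0 for x in range(W)] for y in range(H)]
plots = {(x, y): random.choice(["home", "shop", "office"])
         for y in range(H) for x in range(W) if not is_road[y][x]}

def access_times(start):
    """BFS over the road network: steps from `start` to every road cell."""
    dist = {start: 0}
    q = deque([start])
    while q:
        x, y = q.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < W and 0 <= ny < H and is_road[ny][nx] and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                q.append((nx, ny))
    return dist

fire_station = (0, 0)  # hypothetical public-service location on the road grid
heat = access_times(fire_station)
worst = max(heat.values())
print(f"{len(plots)} plots, worst road access time: {worst} steps")
```

A real solution would replace the fixed grid with a generator and score candidate cities on metrics like this heat map.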
 
==Project proposals for 2018:==
 
Preferred? [[Opposing Views]]
 
[[VR AI Ping Pong Trainer]]
 
Also with Eddy Aston in 2017 <eashton@frontier.co.uk>
 
Simulation LOD-ing
 
Level-of-detail (LOD) optimisations have a long history in rendering. 3D
meshes and textures are simplified as objects move away from the camera,
avoiding wasteful computation where a user can no longer distinguish finer
details. There is increasing use of LOD-ing in other areas such as animation
and AI routines, allowing us to simulate large, active crowds with high
frequency details in the foreground. Can we use similar techniques to
simulate other systems which are otherwise too complex or large to be fully
computed? Perhaps a global weather simulation interacting with person-sized
physics objects, or a densely populated world with interactions ranging from
person-to-person up to empire-to-empire. The challenge is in finding a
generalised representation of the simulation at small scales which has a
stable, consistent approximation at larger scales, allowing a user to zoom
to any point and see deterministic results. Can you include user input to
alter the simulation, and simulate that change at different scales and over
time? You may wish to do this on constrained hardware such as a Raspberry
Pi, where an otherwise simple simulation may be too complex and so the gains
of LOD-ing are necessary.
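As a toy illustration of the idea, the sketch below updates distant agents less often but with a proportionally larger timestep, so coarse and fine agents remain consistent on average; the distance thresholds and agent model are invented for the example, not part of the brief.

```python
# Hypothetical agents spread along a line; the "camera" sits at x = 0.
agents = [{"x": float(x), "v": 1.0, "steps": 0} for x in range(0, 100, 5)]

def lod_level(distance):
    """Coarser simulation for distant agents: update every 2**level ticks."""
    return min(3, int(distance // 25))  # levels 0..3

for tick in range(8):
    for a in agents:
        period = 2 ** lod_level(abs(a["x"]))
        if tick % period == 0:
            # Integrate a bigger timestep for coarse agents, so every agent
            # covers the same ground overall -- the stable, consistent
            # approximation at larger scales that the text asks for.
            a["x"] += a["v"] * period
            a["steps"] += 1

near, far = agents[0], agents[-1]
print(near["steps"], far["steps"])  # the near agent updates every tick, the far one rarely
```

After eight ticks every agent has moved the same distance, but the far agent was simulated in one coarse step instead of eight fine ones.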
 
 
 
 
 
 
 
 
==Project proposals for 2017:==
 
Creature Dash
In some video games over the years, most recently and notably in No Man's Sky, attempts have been made to create procedural alien life forms, giving a sense of diversity and variety while avoiding the production difficulties of manually creating and animating such a vast range of potential life forms. A common thread in these products' solutions has been a malleable connected graph that can represent a creature out of a small, finite number of given parameterised nodes.
The challenge is to build a platform which provides a number of parameterised 'primitive parts' which may be connected together by users in building-block fashion to create self-propelled creatures with some kind of walking or propellant movement cycle. The users may also use other provided simple building blocks to construct 'obstacle' courses for their creatures, to explore their success in navigating them and to see which creature finishes first. Innovations involving sensors, planning, navigation and movement are welcome, but must extend the modular parameterised system rather than being specific to an individual creature. The only rule is that there must be no user interaction with an individual creature during its progress on the course.
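The 'malleable connected graph' of parameterised nodes might be sketched as below; the part kinds and parameters are purely hypothetical stand-ins for whatever primitive set a team chooses.

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    """A parameterised primitive node; a creature is a graph of these."""
    kind: str                       # e.g. "body", "leg", "sensor" (illustrative kinds)
    params: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def attach(self, part):
        """Connect another primitive to this node and return it for chaining."""
        self.children.append(part)
        return part

def count(part, kind):
    """Count nodes of a given kind in the creature graph."""
    return (part.kind == kind) + sum(count(c, kind) for c in part.children)

# A hypothetical four-legged walker assembled from primitives.
body = Part("body", {"mass": 10.0})
for side in ("front-left", "front-right", "back-left", "back-right"):
    leg = body.attach(Part("leg", {"length": 1.2, "phase": 0.5 if "right" in side else 0.0}))
    leg.attach(Part("foot", {"grip": 0.8}))
body.attach(Part("sensor", {"fov": 120}))

print(count(body, "leg"), count(body, "foot"))  # prints "4 4"
```

Because movement and sensing hang off shared parameterised nodes, any innovation added to a `Part` kind automatically applies to every creature built from it.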
 
My version:
 
[[VR Algorave DJ]]
 
Why can't a DJ be more like an orchestra conductor, remixing instrumental sections and adding new expressive content, instead of simply selecting from a library of prerecorded tracks? In the future this will be possible, using gesture controllers in virtual reality. We can provide a Myo gesture control armband. Your task is to create a VR space in which the DJs of the future will edit and configure algorave-style music synthesis programs. You can use APIs for the Sonic Pi music language from Cambridge's Sam Aaron as a back end to produce professional-standard musical results.
VR DJ
Virtual Reality is becoming an increasingly popular medium for computing applications, particularly in entertainment. There is currently a lot of undiscovered potential in VR, especially related to its use in creativity and manipulation, rather than just in terms of 'experience'. This project aims to explore the more tactile and immediate possible uses for VR: using a headset and application to create live, responsive audio/visual entertainment through virtual controls presented to the user, allowing them to mix audio, video and potentially other media presented to an audience in real time.
VR Minigame BuildSpace
VR has become a serious contender in the field of video games and entertainment in the last few years. There are still few applications in this area which allow users to build and share their own games in a common virtual environment. Your task in this project will be to build on some existing technologies to develop such a space, allowing users to build operational games out of primitives, assets and rules, and potentially even to collaborate in doing so.
And, slightly more high-level:
Plan -> VR
This project is about interpreting 2d building plans given in a commonly accepted format, and using these to produce a decorated and lit 3d environment from the plan, allowing walls and faces to be set with various materials and so rendered from a palette of textures and models. The environments should be navigable via a convincing VR interface, potentially with the ability to manipulate and decorate them from within.
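A minimal sketch of the plan-interpretation step might look like this. The ASCII grid is an invented stand-in for a real plan format (such as DXF or IFC); each wall cell is extruded into a 3d box ready for material assignment and rendering.

```python
# Invented plan format: '#' is wall, '.' is floor.
PLAN = [
    "#####",
    "#...#",
    "#.###",
    "#...#",
    "#####",
]

def extrude_walls(plan, height=2.5):
    """Turn each wall cell into an axis-aligned box: ((x0, y0, z0), (x1, y1, z1)),
    with x/y on the ground plane and z pointing up."""
    boxes = []
    for y, row in enumerate(plan):
        for x, cell in enumerate(row):
            if cell == "#":
                boxes.append(((float(x), float(y), 0.0),
                              (float(x + 1), float(y + 1), height)))
    return boxes

walls = extrude_walls(PLAN)
print(len(walls), "wall boxes")  # one box per '#' cell
```

A fuller pipeline would merge adjacent boxes into wall runs, assign materials per face, and hand the mesh to a VR renderer.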
 
==Projects in 2016:==
 
[[Equity Exchange]]
 
==Projects in 2015:==


* [[Planet Builder]]


==Projects 2014:==


* [[Rent-A-Mob]]
* [[Multi Chat]]


==Projects in 2013:==


Contact: Ben Nicholson <bnicholson@frontier.co.uk>


* [[A platform for live online modding]]

Latest revision as of 18:15, 5 November 2020