Boeing

From Computer Laboratory Group Design Projects
Contact: Richie Jones <richard.jones16@boeing.com>


Suggestion for 2021:

Proposed client: Luke Baxter (luke.b.baxter@boeing.com)

Interested in the use of probabilistic programming approaches that allow decision makers to intuitively specify and explore policy models when interacting with autonomous agents.

Micro-friends video diary

This is your chance to be the next Instagram! As more people carry video-capture devices (Google Glass, GoPro), we collect hours of video. Some of those hours include sequences of friends enjoying themselves. But nobody has time to review and edit all that footage. Your task is to use a face detection algorithm (Viola-Jones works well) to extract those precious seconds where a friend's face is moving enough to be exciting. If your friends are a little less expressive, you can crop or speed them up as necessary. The goal is to turn the most engaging video extracts into a collection of animated GIFs, each one or two seconds long, that are embedded in a web page to provide a moving diary of your social life.
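The clip-selection step in the Micro-friends video diary brief above can be sketched independently of the detector. The following is a minimal sketch, assuming face detection (e.g. OpenCV's Viola-Jones cascade) has already produced one face-centre coordinate per frame; the function names and parameters here are illustrative, not a prescribed design.

```python
# Sketch: pick the liveliest 1-2 second windows from per-frame face positions.
# Assumes a detector has already yielded one (x, y) face centre per frame.

def movement_scores(centres):
    """Frame-to-frame movement of the face centre (Euclidean distance)."""
    return [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
            for (x1, y1), (x2, y2) in zip(centres, centres[1:])]

def best_windows(centres, fps=25, seconds=2, top_n=3):
    """Return (start, end) frame ranges of the top_n most active windows."""
    scores = movement_scores(centres)
    win = fps * seconds
    if len(scores) < win:
        return [(0, len(centres))]
    totals = [(sum(scores[i:i + win]), i) for i in range(len(scores) - win + 1)]
    chosen, used = [], set()
    for total, i in sorted(totals, reverse=True):
        if used.isdisjoint(range(i, i + win)):   # no overlapping clips
            chosen.append((i, i + win))
            used.update(range(i, i + win))
        if len(chosen) == top_n:
            break
    return sorted(chosen)
```

Each selected frame range would then be cropped around the face and exported as a one- or two-second animated GIF.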
 
===2017===
 
I wonder if we should propose something that applies machine vision to existing social media video streams? The Periscope API has now been closed down, but I heard that WeChat has both a good API and video support.
 
[[Who's at my party?]]
 
===2016===
 
[[Architecture for a Video Facebook]]
 
===earlier suggestions===
 
Non-linear Video Synthesis
 
Digital video-editing suites such as Adobe Premiere and Apple iMovie are stuck in a 20th-century linear film model: a sequence of clips is joined end-to-end, possibly with overlap transitions. However, in today's films, live action and digital graphics are far more integrated, often with animated characters and blue-screen footage composited onto a live video scene overlaid with simulated weather. We already have tools for digital music production that blend many overlapping tracks, some live and some synthesised, with many concurrent filters and effects. Your task is to build a system for creating non-linear combinations of video source material, animations and filters. You should model it on the SuperCollider system for sound and music synthesis, which constructs a graph of unit generators passing data streams between them.
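The SuperCollider analogy can be made concrete with a small pull-based graph: each node pulls frames from its inputs and emits one frame per tick. This is a minimal sketch, with frames modelled as single brightness values for brevity and all node names invented for illustration; a real system would pass image buffers along each edge.

```python
# Sketch of a SuperCollider-style unit-generator graph for video, with frames
# modelled as plain numbers (brightness) rather than image buffers.

class Node:
    """A unit generator: pulls frames from its inputs, emits one per tick."""
    def __init__(self, *inputs):
        self.inputs = inputs
    def tick(self, t):
        raise NotImplementedError

class Source(Node):
    def __init__(self, frames):
        super().__init__()
        self.frames = frames
    def tick(self, t):
        return self.frames[t]

class Gain(Node):          # a "filter": scale each frame
    def __init__(self, inp, factor):
        super().__init__(inp)
        self.factor = factor
    def tick(self, t):
        return self.inputs[0].tick(t) * self.factor

class Mix(Node):           # a "blend": average two streams
    def tick(self, t):
        a, b = self.inputs
        return (a.tick(t) + b.tick(t)) / 2

# Build the graph: two sources, one filtered, then blended.
live = Source([10, 20, 30])
anim = Source([100, 100, 100])
graph = Mix(Gain(live, 2.0), anim)
output = [graph.tick(t) for t in range(3)]   # [60.0, 70.0, 80.0]
```

The point of the graph structure is that filters, blends and sources compose freely: any node's output can feed any other node's input, just as SuperCollider wires unit generators into a synthesis graph.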
 
Another alternative:
 
YouTube meets Facebook
 
Kids love YouTube, but video is currently very one-dimensional: play, rewind, fast-forward. In contrast, the Facebook "timeline" is actually a multimedia narrative, weaving in conversations, status updates, links to friends and so on. Your task is to make a non-linear version of YouTube, in which videos can be mixed with each other, and with text and drawings, allowing each user to create their own narrative storyline. The architecture to support arbitrary non-linear combinations of different media will be a technical challenge. You could model it on the SuperCollider system for sound and music synthesis, which constructs a graph of unit generators passing data streams between them, to allow filters, blends and graphic overlays. Start with a locally hosted version, and think about a cloud service version (probably with lots of local media caching) as an extension.
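The graphic overlays mentioned above reduce to per-pixel alpha compositing: out = a·fg + (1−a)·bg on each channel. A minimal sketch, with frames modelled as lists of (r, g, b) tuples for brevity rather than real image buffers:

```python
# Sketch of an overlay operation such a media graph would need: alpha
# compositing of a graphic over a video frame, per pixel and per channel.

def over(fg, bg, alpha):
    """Composite foreground over background: out = a*fg + (1-a)*bg."""
    return [tuple(round(alpha * f + (1 - alpha) * b) for f, b in zip(fp, bp))
            for fp, bp in zip(fg, bg)]

frame   = [(0, 0, 0), (100, 100, 100)]     # two pixels of video
overlay = [(255, 0, 0), (255, 0, 0)]       # a red graphic
half = over(overlay, frame, 0.5)           # [(128, 0, 0), (178, 50, 50)]
```

In the graph architecture this would be one node type among many, alongside filters and blends, applied frame by frame to whatever streams feed it.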
 
Yet another alternative:
 
 
 
===2015 project===
 
[[Micro-friends video diary]]

Latest revision as of 17:15, 13 October 2020
