ARM
2021
Pawel Moll <Pawel.Moll@arm.com> is interested
2020
Pawel Moll <Pawel.Moll@arm.com> would be happy to receive an invitation
federico.garzadeleon@arm.com has been discussing volunteer work on digital projects at the Fitzwilliam Museum, and would like to see a group project on a related topic.
2019
Subsequent discussion led to the AI Chef project with client Isabella.Gottardi@arm.com
Around 466 million people in the world, including 34 million children, cannot use communication channels that most of us take for granted. It is estimated that by 2050 one in every ten people will have a disabling hearing loss, and many more will suffer some form of speech disability.
In the UK, more than 70,000 people use British Sign Language (BSL) as their main language. BSL has its own vocabulary and grammatical structure, and is expressed through hand shapes, facial expressions, gestures and body language.
Machine Learning and Neural Networks can help to translate gestures into words. We need your help to improve social interaction and bridge the gap between the Deaf and Hearing communities with efficient, interactive communication tools. Be part of the team that, through a mobile app built with Arm Machine Learning software, offers real-time translation from BSL to words.
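As a rough illustration of the technical core, here is a minimal sketch of a frame-level sign classifier of the kind such an app might build on. The vocabulary size, frame shape and output file name are illustrative assumptions, and the TensorFlow Lite conversion stands in for whatever Arm deployment tooling the team chooses; a real BSL translator would also need temporal modelling of gesture sequences rather than single frames.

```python
# Minimal sketch: a small CNN classifying single video frames into BSL signs.
# Vocabulary size, frame shape and file name are illustrative assumptions.
import tensorflow as tf

NUM_SIGNS = 50            # hypothetical vocabulary size
FRAME_SHAPE = (96, 96, 3)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=FRAME_SHAPE),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# After training, convert for on-device inference on the phone:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
open("bsl_classifier.tflite", "wb").write(converter.convert())
```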
Feedback:
I wasn’t aware that Arm has created a set of Machine Learning tools, and would like to learn more about them. This would certainly be a good technical focus for a project. Are you aware of a training corpus for BSL experiments? I’m not aware that we have any BSL users in our department at present, so if you had connections to a local BSL group, that would be very useful for testing and application focus.
2017
Khaled Benkrid <Khaled.Benkrid@arm.com>
Sean Hong <Sean.Hong@arm.com>
Ashkan Tousimojarad <Ashkan.Tousimojarad@arm.com>
Xabier Iturbe <Xabier.Iturbe@arm.com>
1. Hybrid IoT System Development Framework
The concept of the Internet-of-Things (IoT) is becoming a reality. As it evolves, we need ways to control and interact with a plethora of connected devices.
There are several ways to develop interfaces for IoT devices. However, the range of technologies and skills involved makes the development cost too high for an average developer. In light of the above, our proposal is to develop an IoT system development framework to make it easier to design and build IoT systems.
HTML5 mobile UI frameworks, such as Ionic [1], allow building cross-platform hybrid apps using web development technologies: HTML5 and JavaScript. We also plan to integrate a cloud-based "Backend as a Service" (BaaS) solution such as Firebase [2] into our framework for cloud data storage (and computing).
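To make the BaaS integration concrete, here is a minimal sketch of a node reporting a sensor reading through the Firebase Realtime Database REST interface; the project URL, data layout and device name are illustrative assumptions.

```python
# Minimal sketch: an IoT node reporting a sensor reading to a Firebase
# Realtime Database over its REST interface. The project URL and the
# /devices/<id>/latest layout are illustrative assumptions.
import time
import requests

FIREBASE_URL = "https://example-iot-demo.firebaseio.com"  # hypothetical project

def report_reading(device_id: str, temperature_c: float) -> None:
    """PUT the latest reading under /devices/<device_id>/latest."""
    payload = {"temperature_c": temperature_c, "ts": int(time.time())}
    resp = requests.put(f"{FIREBASE_URL}/devices/{device_id}/latest.json",
                        json=payload, timeout=5)
    resp.raise_for_status()

report_reading("kitchen-node-1", 21.5)
```

The same .json endpoints can be read back with a GET request, so the hub-level GUI and the nodes can share one data model.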
The proposed hybrid IoT system development framework will include the following capabilities:
· Easy-to-use Graphical User Interface (GUI) for app development at hub/cloud level
· Connectivity utilities, including Wi-Fi and Bluetooth
· Location utilities, e.g. using GPS or Wi-Fi
· Embedded systems programming and interfacing at node level (e.g. using ARM mbed [3])
At least one demonstrator application will be developed using the above framework (e.g. Swarm Intelligence).
[1] http://ionicframework.com/
[2] https://firebase.google.com/
[3] https://developer.mbed.org/cookbook/Interfacing-with-JavaScript
Clarification:
Since this is for second-year students and there are time constraints, we might need to simplify the project. We are also open regarding the demonstrator application. Swarm Intelligence is just one option (e.g., depending on your lab facilities, a dozen IoT-aided swarm robots on a mission in a hazardous environment). Please feel free to suggest other applications.
What lab facilities were you thinking of? We can probably simulate a “hazardous” environment for the public demo day - and this could be good fun. We don’t have a dozen robots, though!
2. Convolutional Neural Networks – FPGA Prototyping
Convolutional Neural Networks (CNNs) and deep learning are expected to help process the huge amounts of data produced by IoT-enabled devices, and are already being used in driverless cars to detect pedestrians and recognise traffic signals. The challenge here is to implement a generic and scalable CNN that can easily be adapted for a wide range of different applications.
We propose to design a software-configurable CNN and integrate it on an ARM CPU-centric System-on-Chip (SoC) prototyped on an FPGA. This system will be tested using a proof-of-concept application.
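As a starting point for the hardware mapping, here is a minimal software reference of a single convolution layer written as explicit loops, the form an FPGA datapath would implement; the tensor shapes and channel counts are illustrative assumptions.

```python
# Minimal sketch: a direct 2-D convolution as explicit loops. The inner
# multiply-accumulate over the KxKxC_in receptive field is the unit a
# hardware MAC array implements. Shapes are illustrative assumptions.
import numpy as np

def conv2d(x, w):
    """x: (H, W, C_in) input; w: (K, K, C_in, C_out) weights; 'valid' padding.
    As in CNN frameworks, this is cross-correlation (kernel not flipped)."""
    H, W, C_in = x.shape
    K, _, _, C_out = w.shape
    out = np.zeros((H - K + 1, W - K + 1, C_out))
    for i in range(out.shape[0]):          # output row
        for j in range(out.shape[1]):      # output column
            for co in range(C_out):        # output channel
                out[i, j, co] = np.sum(x[i:i+K, j:j+K, :] * w[:, :, :, co])
    return out

x = np.random.rand(8, 8, 3)
w = np.random.rand(3, 3, 3, 4)
print(conv2d(x, w).shape)  # (6, 6, 4)
```

Checking the FPGA output against such a software reference, layer by layer, is a common way to debug the hardware design.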
2015
Contact in 2015: Dominic Vergine <Dominic.Vergine@arm.com>
Pawel Moll <Pawel.Moll@arm.com>
Lee Smith <Lee.Smith@arm.com>
One of the major challenges in Ebola outbreak regions is information management. Most patient care is done by people with neither medical nor IT experience, and often low levels of literacy. Their training is often only 3 or 4 days, mostly focused on hygiene and the use of protective clothing. Your goal is to create an electronic patient record system that will run on a smartphone, suited to the network connections, power supply and hardware limitations in rural Africa. The system should support regular collection and progress monitoring of symptom reports and vital signs, as would be done in a hospital intensive care unit. It might also present users with advice on triage and patient care. Deployment should be easily customisable for local languages, and should provide mechanisms to feed data back to international coordination bodies such as the World Health Organisation.
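One possible shape for the record store, sketched under the assumption of intermittent connectivity: observations are appended to a compact local queue and uploaded whenever a connection appears. The field names and file format below are illustrative assumptions.

```python
# Minimal sketch: an append-only patient observation record, kept compact so
# it can be queued locally and synced later over an unreliable network.
# Field names and the outbox format are illustrative assumptions.
import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class Observation:
    patient_id: str
    temperature_c: float
    pulse_bpm: int
    symptoms: list = field(default_factory=list)  # e.g. ["fever", "vomiting"]
    recorded_at: int = field(default_factory=lambda: int(time.time()))

def queue_for_sync(obs: Observation, path: str = "outbox.jsonl") -> None:
    """Append one observation per line; a sync job uploads and truncates later."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(obs)) + "\n")

queue_for_sync(Observation("P-0042", 39.2, 110, ["fever"]))
```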
Online Identity for the Base of the Pyramid
Initiatives such as Girl Effect and Africa's Voices provide new content channels for the poorest people in developing countries to gain a visible online identity. How could you minimise the educational and financial obstacles to their visibility in the global media ecosystem? SMS hubs, or Raspberry Pi with cameras? Are keyboards, screens or batteries actually essential? You'll need to consider the whole system: network infrastructure, content propagation if servers aren't powered up 24/7, literacy and appropriate interaction mechanisms for those whose technical expertise may be limited, but who will be empowered by gaining new skills.
Children are often strong supporters of wildlife conservation, but also have pets of their own. The goal of this project is to help them engage with the science of wildlife tracking (via public data sources such as movebank.org), comparing that data to monitoring information that they collect themselves. Although Movebank uses GPS, sonar and other expensive techniques, you could use simple image analysis from a Raspberry Pi time-lapse camera to identify the position of a goldfish, a guinea pig or a hamster in its cage. Children should be able to use the data they collect to make comparisons between their own pets and wild animals - this might include foraging behaviour, "migrations" or seasonal variation in activity.
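A minimal sketch of the image-analysis step: difference consecutive time-lapse frames and take the centroid of the changed pixels as the pet's position. The file names and threshold value are illustrative assumptions, and OpenCV is assumed to be available on the Pi.

```python
# Minimal sketch: locate a pet in a time-lapse frame by differencing it
# against the previous frame and taking the centroid of the changed pixels.
# File names and the threshold are illustrative assumptions.
import cv2
import numpy as np

def pet_position(prev_path: str, curr_path: str):
    prev = cv2.imread(prev_path, cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread(curr_path, cv2.IMREAD_GRAYSCALE)
    diff = cv2.absdiff(prev, curr)                      # pixels that changed
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                                     # pet didn't move
    return int(xs.mean()), int(ys.mean())               # centroid (x, y)

print(pet_position("frame_0001.jpg", "frame_0002.jpg"))
```

Logging these centroids over a day yields an activity trace that children could plot alongside Movebank tracks.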
Other ideas:
"I'm sorry Dave, I'm afraid I can't do that" is one of the iconic quotation in SF cinema history. Let's make it happen again!
A Raspberry Pi can process images using its camera interface and audio using an external sound interface. Based on this, build a "HAL 9001" module (as technology has moved forward since 2001, it doesn't have to be as big as the original 9000 version; however, extra points for a round, red-glowing eye on its enclosure :-) that will recognize and greet registered users (and record aliens), answer questions about the time and weather, notify the user about new email and/or social media messages, read the day's schedule etc.
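A minimal sketch of the interaction loop, using the third-party SpeechRecognition and pyttsx3 packages; wake-word handling, user registration and the email/weather lookups are left out, and the phrases matched are illustrative assumptions.

```python
# Minimal sketch of the HAL 9001 interaction loop: listen for a phrase,
# answer time queries, refuse pod-bay doors. Vision/greeting and the
# email/weather features are omitted here.
import datetime
import pyttsx3
import speech_recognition as sr

voice = pyttsx3.init()
recognizer = sr.Recognizer()

def say(text: str) -> None:
    voice.say(text)
    voice.runAndWait()

with sr.Microphone() as mic:
    recognizer.adjust_for_ambient_noise(mic)
    say("Good morning. I am HAL 9001.")
    while True:
        audio = recognizer.listen(mic)
        try:
            heard = recognizer.recognize_google(audio).lower()
        except sr.UnknownValueError:
            continue  # couldn't understand; keep listening
        if "time" in heard:
            say(datetime.datetime.now().strftime("It is %H:%M."))
        elif "open the pod bay doors" in heard:
            say("I'm sorry Dave, I'm afraid I can't do that.")
```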
As a stretch goal, HAL 9001 "IoT Edition" could also be used to control external appliances, for example lighting in the room (think of the "Computer! Lights!" command used every second on board NCC-1701* ships). Communication should preferably use Bluetooth Low Energy based protocols, with mbed boards as nodes. The "smart home" could also be simulated on an external PC/tablet and controlled via the Raspberry Pi's Ethernet (or WiFi) interface.
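For the BLE stretch goal, a minimal sketch using the third-party bleak library to flip a light node's switch characteristic; the device address and GATT UUID are illustrative assumptions, and a real mbed node would define its own service.

```python
# Minimal sketch: switching a light node over Bluetooth Low Energy with the
# bleak library. The address and characteristic UUID below are hypothetical;
# substitute whatever the mbed node's GATT service actually advertises.
import asyncio
from bleak import BleakClient

LIGHT_NODE = "C0:FF:EE:C0:FF:EE"                      # hypothetical address
SWITCH_CHAR = "0000aaaa-0000-1000-8000-00805f9b34fb"  # hypothetical UUID

async def set_light(on: bool) -> None:
    async with BleakClient(LIGHT_NODE) as client:
        # write 0x01 to switch on, 0x00 to switch off
        await client.write_gatt_char(SWITCH_CHAR, bytes([1 if on else 0]))

asyncio.run(set_light(True))  # "Computer! Lights!"
```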