Mobile mirror shopping assistant

Student design project carried out for Clifford Dive at Qualcomm.

The original design brief was:


Some people find it extremely difficult to imagine how they might look in an item of clothing on the rails of a clothing store. We are looking for a solution that allows users to visualise themselves in particular garments and combinations thereof without the effort of multiple visits to the changing room - this visualisation will appear on their mobile phone, like an electronic Barbie doll. There are several elements to this project:

  • acquisition of a suitable image of the user
  • identification of the product (bar code reader?)
  • acquisition of the garment image (from a server-side catalogue?)
  • generation of a combined person/garment image and display on the phone

We would expect that the processing for this would be partially server-based.
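
As a rough illustration of how such a client/server split might work, the following Python sketch walks through the four elements above: resolving a scanned barcode against a catalogue, then asking a server to composite the garment onto the user's photo and returning the preview for display on the phone. The endpoints, field names, and barcode value are hypothetical placeholders for this sketch, not part of the original brief or any Qualcomm API.

 # Illustrative client-side flow only; all URLs, field names and values below
 # are assumptions made for this sketch, not taken from the project brief.
 import requests
 
 CATALOGUE_URL = "https://example.com/catalogue"   # assumed garment catalogue service
 COMPOSITE_URL = "https://example.com/tryon"       # assumed server-side compositor
 
 def lookup_garment(barcode: str) -> dict:
     """Resolve a scanned barcode to a catalogue entry (garment id, image URL, etc.)."""
     resp = requests.get(f"{CATALOGUE_URL}/{barcode}", timeout=10)
     resp.raise_for_status()
     return resp.json()
 
 def request_composite(user_photo_path: str, garment_id: str) -> bytes:
     """Upload the user's photo and the garment id; receive the combined preview image."""
     with open(user_photo_path, "rb") as photo:
         resp = requests.post(
             COMPOSITE_URL,
             files={"photo": photo},
             data={"garment_id": garment_id},
             timeout=30,
         )
     resp.raise_for_status()
     return resp.content  # image bytes to display on the phone
 
 if __name__ == "__main__":
     garment = lookup_garment("5012345678900")             # value read by the barcode scanner
     preview = request_composite("user.jpg", garment["id"])
     with open("preview.png", "wb") as out:
         out.write(preview)

In this arrangement the phone handles image capture and barcode scanning, while the catalogue lookup and the person/garment compositing run on the server, which matches the expectation above that processing would be partially server-based.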