How I See The World - Major Project Update / Research
The Question:
How do nearsightedness, farsightedness, and astigmatism affect the typographic form at different set distances? Can a typeface be designed to correct optical distortions? Can the data be used to accurately replicate the optical blur and distortion in other situations? Can this information then be put into application form on Apple's iPad, so that the user can share their custom-tailored vision experiment with others?
Concept:
The user holds the iPad and snaps a photo of anything around them they want to use for the test. They then remove their corrective eyewear (glasses or contacts), look at that same object, and note the level of blur. After putting their glasses back on, they use sliders in the iPad application to match the blur they saw. Next, the iPad either uploads the data (online calculation) or crunches it internally, and the user gets their personalized results. These results can then be shared with friends, family, or other users to show and compare the way each of you sees the world. The user you send your data to can then take a different photo and plug in your data, or select your results (or your profile, if the app saves friends' profile data), and view the photo they took with your vision results applied.
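Since the app does not exist yet, the slider-to-blur step can be sketched on the desktop in plain Python. Everything here is an assumption for illustration: the function names are hypothetical, the slider is modeled as a 0.0-1.0 value, and the linear mapping from slider position to Gaussian blur strength is a placeholder for whatever the real calibration turns out to be.

```python
import math

def blur_kernel(slider_value, max_sigma=4.0):
    """Map the app's slider position (0.0 = sharp, 1.0 = maximum blur)
    to a normalized 1-D Gaussian kernel. The linear slider-to-sigma
    mapping is an assumption for this sketch."""
    sigma = max(slider_value * max_sigma, 1e-6)
    radius = int(3 * sigma) + 1
    weights = [math.exp(-(x * x) / (2 * sigma * sigma))
               for x in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def blur_row(pixels, slider_value):
    """Apply the blur to one row of grayscale pixel values, clamping
    at the edges -- a stand-in for blurring the user's photo."""
    kernel = blur_kernel(slider_value)
    radius = len(kernel) // 2
    out = []
    for i in range(len(pixels)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - radius, 0), len(pixels) - 1)
            acc += w * pixels[j]
        out.append(acc)
    return out

# A sharp black-to-white edge softens as the slider moves up.
row = [0] * 5 + [255] * 5
sharp = blur_row(row, 0.0)   # slider at zero leaves the edge intact
soft = blur_row(row, 1.0)    # heavy blur spreads the edge into grays
```

Sharing "settings" with a friend then amounts to sending the slider values and re-running the same blur on the friend's photo.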
Social Integration:
The user will be able to share their personalized vision "settings" with other users who also have the app. This would allow us to show a friends list, or profile list, that the user could pick through to try out several friends' settings. Consider it a modern way to take off a friend's glasses, so that other people can see the world through their eyes.
The user has the option of printing off their own personalized set of letterforms based on their vision settings. It could spell out their name or whatever else they wanted it to say. This could be e-mailed to the user or a friend, or printed using Apple's AirPrint system. There could also be an online gallery that you and your friends can upload to and share.
Prototype:
We will be using the iPad to create our prototype of how the application would function with the results of our experiment. Hypothetically, it would be created on the iPad rather than on a MacBook Pro (MBP).
Functioning Details:
The images will be replicated using Photoshop on the MBP. (We tried Photoshop Express on the iPad and iPhone, but it lacked any blur or scripting capability.)
Typefaces Explored:
We picked different typefaces to test with (ones that people interact with on a daily basis), e.g.:
- Helvetica
- Interstate
- Futura
- Trade Gothic
- Garamond
- Univers
Interesting Fact: "Helvetica is everywhere. It is the standard typeface of hospital signage systems, medicine labels..." -http://www.frieze.com/issue/article/typecast/
Technical Details:
- 1-3 ft: farsighted range
- 6-12 ft: nearsighted range

The astigmatism would be a radial distortion. Using these measures we can show the near and far side. Consider the zero point at 6 ft, with farsightedness running to -6 and nearsightedness to +6.
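The distance scale above can be sketched as a small helper. The function name and the linear, clamped mapping are assumptions for illustration; only the scale endpoints (-6 to +6) and the 6 ft zero point come from the notes above.

```python
def distance_score(feet):
    """Place a test distance (in feet) on the -6..+6 scale described
    above, with the zero point at 6 ft: negative values fall on the
    farsighted side, positive values on the nearsighted side. The
    linear mapping and clamping are assumptions for this sketch."""
    return max(-6.0, min(6.0, feet - 6.0))

print(distance_score(6))    # 0.0  -- the zero point
print(distance_score(12))   # 6.0  -- nearsighted end of the 6-12 ft range
print(distance_score(1))    # -5.0 -- within the 1-3 ft farsighted range
```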
Final Artifacts:
The iPad application.
The image data.
A poster of all of the data visualized.