Phase Two

Internet Innovation Startups

emergentfutures:

fastcodesign:

Apple’s “Transparent Texting” Could Make Typing And Walking Safer

If you’re walking, you really shouldn’t be texting. While not as perilous as texting and driving, there’s no surer way to annoy fellow pedestrians than by zigzagging across a sidewalk, eyes glued to your precious screen. But if you absolutely must walk and text, Apple might have a new feature that could make that action safer.

More> Co.Design

Paul Higgins: A solution looking for a problem. Just don’t walk and text. Be a little more in the world.
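The reported idea is to render the live camera feed behind the message thread so the street stays visible while you type. The core technique, compositing a semi-transparent UI layer over video frames, can be sketched with simple per-pixel alpha blending. This is an illustrative sketch only; the function and pixel values are assumptions, not Apple’s implementation.

```python
def alpha_blend(ui_pixel, camera_pixel, alpha):
    """Blend a UI pixel over a camera pixel.

    alpha=1.0 shows only the UI layer; alpha=0.0 shows only the
    camera feed behind it. Pixels are (R, G, B) tuples.
    """
    return tuple(
        round(alpha * u + (1 - alpha) * c)
        for u, c in zip(ui_pixel, camera_pixel)
    )

# A white text bubble at 60% opacity over a dark street scene:
bubble = (255, 255, 255)
street = (40, 60, 80)
blended = alpha_blend(bubble, street, 0.6)  # (169, 177, 185)
```

Applied to every pixel of every frame, this is all “transparent texting” needs conceptually: the text stays legible while the world bleeds through underneath it.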

prostheticknowledge:

Deep Learning

A computer vision research project at Purdue University is developing software that understands the world and the objects it sees:

Researchers are working to enable smartphones and other mobile devices to understand and immediately identify objects in a camera’s field of view, overlaying lines of text that describe items in the environment.

"It analyzes the scene and puts tags on everything," said Eugenio Culurciello, an associate professor in Purdue University’s Weldon School of Biomedical Engineering and the Department of Psychological Sciences.

The concept is called deep learning because it requires layers of neural networks that mimic how the human brain processes information. Internet companies are using deep-learning software, which allows users to search the Web for pictures and video that have been tagged with keywords. Such tagging, however, is not possible for portable devices and home computers.

More Here

(via futurescope)
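The “layers” in deep learning are stacked transformations, each feeding its output into the next until a final layer produces scores for tags. A minimal sketch of such a forward pass is below; the weights and sizes are made-up toy values for illustration, not anything from Culurciello’s system, which uses large learned networks.

```python
def relu(x):
    """Standard nonlinearity between layers: clamp negatives to zero."""
    return [max(0.0, v) for v in x]

def dense(inputs, weights, biases):
    """One fully connected layer: out[j] = sum_i inputs[i]*weights[i][j] + biases[j]."""
    return [
        sum(i * w[j] for i, w in zip(inputs, weights)) + b
        for j, b in enumerate(biases)
    ]

def forward(x, layers):
    """Run x through a stack of (weights, biases) layers, ReLU after each."""
    for weights, biases in layers:
        x = relu(dense(x, weights, biases))
    return x

# Two tiny layers: 3 input features -> 2 hidden units -> 1 "tag score".
layers = [
    ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.2]], [0.0, 0.1]),
    ([[1.0], [0.5]], [-0.2]),
]
score = forward([1.0, 2.0, 3.0], layers)  # [~0.45]
```

The depth (number of stacked layers) is what lets such networks build up from raw pixels to object-level tags; a phone-sized version of this pipeline is what the Purdue work aims at.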

fastcompany:

Mind Reading Comes One Step Closer To Reality With The Glass Brain

What if you could see inside someone’s mind? It’s not possible to know exactly what another person is thinking, but neuroscientists from UCSD and UCSF are on their way. They created a “glass brain”: software that shows a person’s brain reacting to stimuli in real time. The implications for virtual reality and digital communication are tremendous, according to Philip Rosedale, the founder of Second Life, who has been collaborating with the neuroscientists.

“We’re trying to identify which critical factors can most help people feel like they’re face to face,” says Rosedale, whose new company, High Fidelity, is currently working on a next generation virtual world.
More> Co.Create
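“Real time” here means the rendering updates as each new sensor sample arrives, rather than after the recording ends. The streaming pattern behind any such live display can be sketched with a sliding-window smoother; the window size and sample values below are illustrative assumptions, not the Glass Brain pipeline.

```python
from collections import deque

def stream_smooth(samples, window=3):
    """Yield a running mean as each sample arrives, the way a live
    visualization updates frame by frame instead of waiting for all data."""
    buf = deque(maxlen=window)
    for s in samples:
        buf.append(s)          # newest sample in, oldest falls out
        yield sum(buf) / len(buf)

# Simulated sensor readings arriving one at a time:
readings = [1.0, 3.0, 5.0, 7.0]
smoothed = list(stream_smooth(readings))  # [1.0, 2.0, 3.0, 5.0]
```

Each yielded value is available immediately, so a renderer can redraw the scene per sample, which is the property that makes a brain visualization feel live.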

vimeo:

Behind every toy there is a creator and a 3D printer. Learn how technology makes future playthings possible in this film by @efranfilms for @Intel’s Empowering Innovators series. #lookinside »> http://bit.ly/1mddvsI

thisistheverge:

Apple’s CarPlay puts iOS on your dashboard

As was rumored on Friday, Apple is today finally ready to launch a new iPhone integration setup for car infotainment systems. Calling it CarPlay, the Cupertino company claims it’s “designed from the ground up to provide drivers with an incredible experience using their iPhone in the car.” CarPlay is built primarily around the use of Siri voice commands and prompts, providing an “eyes-free” experience where you can respond to incoming calls, dictate text messages, or access your music library. It’s also predictive, claiming to know where you’ll most likely want to go based upon addresses found in your email, texts, contacts, and calendars. Apple’s Maps are also an integral part of the service, which was previewed back in June of last year.
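The predictive piece amounts to mining address-like strings out of a user’s messages and ranking likely destinations by how often they recur. A toy version of that idea is below; the regex and frequency ranking are assumptions made for illustration, not Apple’s actual method.

```python
import re
from collections import Counter

# Deliberately simple pattern: "number + capitalized word + street suffix".
ADDRESS_RE = re.compile(r"\b\d+\s+[A-Z][a-z]+\s+(?:St|Ave|Rd|Blvd)\b")

def likely_destinations(messages):
    """Extract street-address-looking strings from messages and
    rank them by how often they appear."""
    counts = Counter()
    for msg in messages:
        counts.update(ADDRESS_RE.findall(msg))
    return [addr for addr, _ in counts.most_common()]

msgs = [
    "Dinner at 12 Main St tonight?",
    "Reminder: dentist, 450 Oak Ave",
    "Still on for 12 Main St?",
]
ranked = likely_destinations(msgs)  # ['12 Main St', '450 Oak Ave']
```

The address seen twice ranks first, which is the intuition behind “knowing where you’ll most likely want to go” from email, texts, and calendar entries.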

thisistheverge:

Blurred lines: data project shows popular running routes in 22 cities

An age of inexpensive wearable devices that track our every move, combined with plenty of places to post that information publicly, has produced a perfect mix of data showing where people exercise. Nathan Yau over at Flowing Data has tapped into it, taking public running logs from RunKeeper and stacking them up over maps of the cities. The result is a collection of maps for 22 cities (most of which are in the US) that shows which routes get the most foot traffic.

(via emergentfutures)
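Stacking many runs onto one map is essentially binning GPS points into a grid and counting hits per cell: cells crossed by many runs render brighter. A toy version follows; the grid resolution and coordinates are illustrative assumptions, while Yau’s actual maps are built from real RunKeeper logs.

```python
from collections import Counter

def route_heatmap(traces, cell=0.01):
    """Count how many GPS points fall into each grid cell.

    traces: list of runs, each a list of (lat, lon) points.
    cell:   grid resolution in degrees.
    Denser cells correspond to more heavily used routes.
    """
    heat = Counter()
    for run in traces:
        for lat, lon in run:
            heat[(round(lat / cell), round(lon / cell))] += 1
    return heat

# Two short simulated runs that share a starting cell:
runs = [
    [(37.7751, -122.4184), (37.7758, -122.4189)],
    [(37.7751, -122.4184), (37.7902, -122.4011)],
]
heat = route_heatmap(runs)
busiest = heat.most_common(1)[0]  # the shared cell, with 3 points
```

Drawing each cell with brightness proportional to its count reproduces the “stacked runs” look of the Flowing Data maps: popular routes glow, one-off detours stay faint.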
