In the January 9th New York Times Fashion & Style section, there was a charming article by Mandy Len Catron called “To Fall in Love With Anyone, Do This”.  In it, she talks about some research published back in 1997, in which professor of psychology Arthur Aron and his co-authors describe a method for inducing interpersonal closeness between strangers.

Catron writes:

I explained the study to my university acquaintance. A heterosexual man and woman enter the lab through separate doors. They sit face to face and answer a series of increasingly personal questions. Then they stare silently into each other’s eyes for four minutes. The most tantalizing detail: Six months later, two participants were married. They invited the entire lab to the ceremony.

“Let’s try it,” he said.

She and her partner googled up the questions and went through the procedure, which they apparently enjoyed a great deal:

You’re probably wondering if he and I fell in love. Well, we did. Although it’s hard to credit the study entirely (it may have happened anyway), the study did give us a way into a relationship that feels deliberate. We spent weeks in the intimate space we created that night, waiting to see what it could become.

Adorable, right?  Anyways, it struck me that other folks might want to try the experiment themselves, and it’d be helpful if the whole thing were all together in one place, accessible from a smartphone so that it would be easy to try with a date, for instance.

So I had a few extra hours over the last few evenings, and I threw together a little mobile-friendly, touch-friendly web app version of the experiment.

I took the 36 questions from the original study directly, along with a slightly modified version of the instructions, and I included a little timer that will count down four minutes for the eye contact exercise.  The whole thing is just HTML and JavaScript, and weighs in at a little over 260 lines of code, all told.
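If you’re curious how the countdown works, here’s a minimal sketch of a four-minute timer in plain JavaScript, roughly the approach the app takes (the function and element names below are made up for illustration, not copied from the actual code):

```javascript
// A minimal four-minute countdown (illustrative names, not the app's actual code).
var FOUR_MINUTES = 4 * 60; // seconds

function formatTime(totalSeconds) {
  var m = Math.floor(totalSeconds / 60);
  var s = totalSeconds % 60;
  return m + ":" + (s < 10 ? "0" + s : s);
}

function startCountdown(displayElement, onDone) {
  var remaining = FOUR_MINUTES;
  displayElement.textContent = formatTime(remaining);
  var timer = setInterval(function () {
    remaining -= 1;
    displayElement.textContent = formatTime(remaining);
    if (remaining <= 0) {
      clearInterval(timer);
      onDone();
    }
  }, 1000); // tick once per second
}

// Usage: assumes a <span id="timer"></span> somewhere on the page.
startCountdown(document.getElementById("timer"), function () {
  alert("Four minutes are up!");
});
```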

You can try it yourself here:

http://www.ianmonroe.com/instantcloseness/

I’ve posted the code on GitHub, so feel free to fork it and make improvements if you feel so inclined.

“Nothing should be off limits to discussion. No, let me amend that. If you think some things should be off limits, let’s sit down together and discuss that proposition itself. Let’s not just insult each other and cut off all discussion because we rationalists have somehow wandered into a land where emotion is king.”

http://ift.tt/1n32d5L

Click here for the full article

Posted from Facebook

Something I’ve written a little bit about before.

From the article:

“There is by now evidence from a variety of laboratories around the world using a variety of methodological techniques leading to the virtually inescapable conclusion that the cognitive-motivational styles of leftists and rightists are quite different. This research consistently finds that conservatism is positively associated with heightened epistemic concerns for order, structure, closure, certainty, consistency, simplicity, and familiarity, as well as existential concerns such as perceptions of danger, sensitivity to threat, and death anxiety.”

Click here for the full article

Posted from Facebook

You want a physicist to speak at your funeral. You want the physicist to talk to your grieving family about the conservation of energy, so they will understand that your energy has not died. You want the physicist to remind your sobbing mother about the first law of thermodynamics; that no energy is…

Click here for the full article

Posted from Facebook

Excellent web-based e-book discussing modelling cognition probabilistically and generatively.

It includes live code samples to mess around with, illustrating what they’re talking about.

Extremely interesting stuff, and not too dense. If you’re interested in making or learning about intelligent software, this should keep you busy for a little while.

In this book, we explore the probabilistic approach to cognitive science, which models learning and reasoning as inference in complex probabilistic models. In particular, we examine how a broad range of empirical phenomena in cognitive science (including intuitive physics, concept learning, causal r…
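To give a flavor of the approach (the book’s own examples run live in a probabilistic programming language; this toy sketch in plain JavaScript is mine, not from the book), here’s Bayesian inference by brute-force rejection sampling: draw a coin’s weight from a uniform prior, simulate flips, and keep only the guesses that reproduce the observed data.

```javascript
// Toy Bayesian inference by rejection sampling (my example, not the book's):
// infer a coin's unknown weight after observing 4 heads in 5 flips.
function flip(p) { return Math.random() < p; }

function sampleWeightGivenData(headsObserved, totalFlips) {
  while (true) {
    var weight = Math.random(); // prior: weight ~ Uniform(0, 1)
    var heads = 0;
    for (var i = 0; i < totalFlips; i++) {
      if (flip(weight)) heads++;
    }
    if (heads === headsObserved) return weight; // keep only samples that match the data
  }
}

// Approximate the posterior mean from many accepted samples.
var sum = 0, n = 10000;
for (var i = 0; i < n; i++) {
  sum += sampleWeightGivenData(4, 5);
}
console.log("posterior mean weight ~ " + (sum / n).toFixed(2)); // about 0.71
```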

Click here for the full article

Posted from Facebook

“The marks on his arms weren’t the tell-tale signs of heroin addiction; they came from where his captor, a ruthless modern-day vampire and also a local dairy farmer and respected landowner named Papu Yadhav, punctured his skin with a hollow syringe. He had kept the man captive so he could drain his blood and sell it to blood banks.”

Click here for the full article

Posted from Facebook

Letters printed in living cells

(Note: This article originally appeared on Acceler8or.com on July 31, 2012.  It’s republished here for my own archives.)

Scientists in Canada have invented a device they claim can print large patches of living tissue.

In an article which appears this month in the journal Advanced Materials, Axel Guenther of the Department of Mechanical and Industrial Engineering, and Milica Radisic of the Department of Chemical Engineering and Applied Chemistry at the University of Toronto detail a machine they’ve created which can precisely print living tissue to order.

Their device uses biochemicals to create layers of “mosaic hydrogel,” a substrate into which living cells can be precisely deposited, like agar in a petri dish. The placement of the cells is so precise that the scientists were able to print the word “Toronto” on to the substrate.

And beyond manufacturing single layers of tissue, the scientists were able to stack the printed sheets to build three-dimensional structures of substantial thickness.

It isn’t yet commercialized, but Guenther has bold plans for the technology. “My laboratory is currently pursuing different applications of the technology—different tissues,” Guenther said in a press release. “But one of my dreams is to one day engineer a vascularized leaf – perhaps a maple leaf.”

Needless to say, a system for printing living tissue on demand could have enormous ramifications for future biotech. Low hanging fruit? Generating new skin for treating burn victims, growing custom organs for transplant, or synthesizing tissues specifically designed to produce new medicines. But paired with recent advances in synthetic biology, there may be an amplification effect which we can scarcely imagine now.

Only time will tell.

Image courtesy of gesturetek.com

(Note – This article was first published by h+ Magazine, on May 12, 2010.  It’s republished here for my own archives.)

Chances are, you’re not using the same computer you were twenty years ago. But chances are, you’re still using the same basic user interface — a mouse for pointing, a keyboard for typing. While generations of hardware and software have come and gone, the paradigm for interacting with our machines has remained pretty much the same. But that’s slowly changing: Nintendo’s Wii system has gotten video gamers off the couch, Apple’s iPhone has acclimated us to using touch to control our devices, and a new generation of user-interface systems is beginning to come to the consumer market, promising more natural, intuitive, and engaging experiences.

Vincent John Vincent, president and co-founder of GestureTek, thinks next-generation user interfaces will be driven by movement. His company makes systems that use cameras and computer vision to watch a user’s movements, and then translate those movements into controls. “As the interfaces that we see on the screen become more dynamic and deep, with 3-D jumping off the screen or deeper into the screen, then the ability to reach out with your hand and manipulate them in that 3-D space is a much more natural way to go than just to have one point of control with the mouse,” Vincent said.

GestureTek has been building equipment and displays that respond to movement since the 1990s, according to Vincent. “We are the inventors and pioneers in this space, and luckily, early enough in it as well that we’ve been able to get a lot of patents on what we’ve done,” he said. “We’re very lucky that we were way ahead of our time for a long time, we sold thousands of installations of that technology, into museums and science centers and retail in various locations.”

“In the early 2000s, we started expanding and we created a number of different technologies that were interactive surfaces like floors and walls and windows and whatnot that would just be reacting to your motion and movement and being able to let people walk over those in front of them and just pick up your general gestures,” Vincent said. “We just found it to be very, not just natural, but engaging. It captures people’s attention, it makes the experience more entertaining and dynamic.”

And gesture controls are coming to a screen near you sooner than you might think — Microsoft’s Project Natal, which was announced at last year’s Electronic Entertainment Expo, is due to be released by Christmas of this year. Project Natal promises a 3-D, depth-sensing camera peripheral for the Xbox 360, which will use software licensed from GestureTek to enable gamers to play without any controller at all, simply by using the movements of their body.

Hitachi has demonstrated a prototype HDTV that uses an embedded 3-D camera to replace the traditional remote control. Users simply wave their hand to change the channel or turn down the volume.

Mobile phone manufacturers such as DoCoMo have licensed software from GestureTek as well. “We looked at the mobile phone market and said well, there is a processor, a display and a camera, what a perfect product,” said Vincent. “So we took what we had already created and evolved it even more so that we could use a phone in the hand to act like a joystick, and the camera would watch how the phone was moving in relationship to the world, or you could gesture at the phone, etc.”

The availability of depth-sensing cameras is driving the speed with which these innovations are getting into the hands of consumers. “The depth cameras that are coming to the market use infrared light and three depth sensors, so that when an array of light is pulsed out into the environment, much like an ultrasound, it bounces back,” Vincent said. “The sensors can tell how quickly it’s come back and therefore build up a depth perception of the world in front of it. It’s very similar to ultrasound but done with light. Obviously, the lenses that are capturing that information are very sophisticated and that is what has kept them very expensive up till this point in time.”
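The arithmetic behind that kind of ranging is straightforward: light travels at a known speed, so half the measured round-trip time gives the distance to whatever the pulse bounced off. A back-of-the-envelope sketch (the numbers here are illustrative, not from any particular camera):

```javascript
// Time-of-flight ranging: distance = (speed of light * round-trip time) / 2.
var SPEED_OF_LIGHT = 299792458; // meters per second

function depthFromRoundTrip(roundTripSeconds) {
  return (SPEED_OF_LIGHT * roundTripSeconds) / 2;
}

// A pulse that returns after about 13.3 nanoseconds came from roughly 2 meters away.
console.log(depthFromRoundTrip(13.3e-9).toFixed(2) + " m"); // ~1.99 m
```

In practice these cameras tend to measure phase shifts of modulated light rather than timing individual pulses directly, but the underlying geometry is the same.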

Of course, those hardware costs will drop over time. And that means one day soon, you might be able to ditch that mouse once and for all.