I love this new “binders full of women” meme that sprang up last night.
So I’m going to cash in.
So this is a fun little thing.
I made a thing that goes out to Twitter, finds instances of the phrase “Santorum is a,” and posts them into a big, bold ticker.
Now you can find out in real-time what the internet thinks a Santorum is.
What do you think? Funny or stupid?
Try it out here:
PS – If you’re interested, I made it with jQuery and a plugin called TweetQuote, which I modified a little.
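I won’t reproduce the jQuery/TweetQuote code here, but the core trick is simple enough to sketch in plain JavaScript: scan tweet text for the phrase and pull out whatever follows it. The function name, regex, and sample tweets below are all my own illustrative stand-ins, not the actual plugin code.

```javascript
// Given raw tweet texts, pull out whatever follows the target phrase
// so it can feed a ticker. The tweets here are a made-up sample; the
// real version fetched them live from Twitter's search API.
function extractCompletions(tweets, phrase) {
  var pattern = new RegExp(phrase + "\\s+([^.!?,]+)", "i");
  var completions = [];
  tweets.forEach(function (text) {
    var match = text.match(pattern);
    if (match) {
      completions.push(match[1].trim());
    }
  });
  return completions;
}

// Example usage with made-up tweets:
var sample = [
  "Wow, Santorum is a real piece of work.",
  "Totally unrelated tweet about binders.",
  "santorum is a frothy mix, apparently"
];
console.log(extractCompletions(sample, "Santorum is a"));
```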
Well, it only took me 12 months, but I have finally finished the Bibleizer script.
It takes arbitrary text as input and censors every word that doesn’t appear in the King James Version of the Holy Bible.
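The idea boils down to a whitelist check: split the input into words, look each one up in the set of Bible words, and star out anything that misses. Here’s a minimal sketch of that logic — the tiny `kjvWords` set is a hypothetical stand-in for the full KJV word list so the example runs on its own.

```javascript
// Minimal sketch of the Bibleizer idea. `kjvWords` would really hold
// every (lowercased) word in the King James Bible; this small sample
// is just a stand-in.
var kjvWords = new Set([
  "in", "the", "beginning", "god", "created", "heaven", "and", "earth"
]);

function bibleize(text) {
  // Split on whitespace but keep the whitespace tokens so the
  // original spacing survives the round trip.
  return text.split(/(\s+)/).map(function (token) {
    if (token === "" || /^\s+$/.test(token)) return token;
    // Strip punctuation for the lookup, but censor the whole token.
    var word = token.toLowerCase().replace(/[^a-z']/g, "");
    return kjvWords.has(word) ? token : token.replace(/\w/g, "*");
  }).join("");
}

console.log(bibleize("In the beginning God invented blogging"));
```

“invented” and “blogging” aren’t in the sample word list, so they come out fully starred.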
So tomorrow is December 1, which is the deadline for entries in the Knight News Challenge, an annual contest for innovative ideas at the intersection of technology and news, funded by the John S. and James L. Knight Foundation.
We propose building a web service that can intelligently parse and analyze text it has never seen before, and offer insight into the quality of the content, along with the degree to which the filter is confident in its analysis: the “News Grade”. We envision making this available in much the same way Akismet and OpenCalais provide their services: an open API, giving third-party developers an easy way to make use of News Grader’s analysis and quality ratings. This API would enable easy integration of our system into a variety of formats, including browser extensions, CMS plugins, and desktop/web applications such as RSS readers, news aggregators, or social networking software.
We believe we can do most of the heavy lifting using well-known algorithms, including a modified version of Bayesian induction, the Porter stemmer algorithm, entity extraction algorithms, and manifold learning algorithms. We intend to identify and weight word clusters in an article, and compare a new article to the ratings of other articles with similar word clusters, amongst other techniques.
This automated process will be supplemented by a mechanism for users to provide structured feedback, which will allow the web service to “learn”, and which will increase the quality of the analysis provided over time. The more that people use the service, the smarter it will get. It also makes it more difficult to “game” the analysis of a given piece of content, since the machine intelligence will compare new content to other content it’s previously encountered, and will weight new feedback as only a portion of its analysis.
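To make the word-cluster idea above a bit more concrete, here’s a toy sketch of one piece of it: score a new article by comparing its word profile to previously rated articles with similar vocabulary. Everything in it is illustrative — the stemmer is a crude suffix-stripper standing in for the Porter algorithm, the similarity measure is plain cosine similarity rather than the full cluster weighting, and the rated “corpus” is made up.

```javascript
// Crude stand-in for the Porter stemmer: strip a few common suffixes.
function stem(word) {
  return word.toLowerCase().replace(/(ing|ed|ly|s)$/, "");
}

// Build a bag-of-stems word count for a piece of text.
function wordVector(text) {
  var counts = {};
  text.split(/\W+/).filter(Boolean).forEach(function (w) {
    var s = stem(w);
    counts[s] = (counts[s] || 0) + 1;
  });
  return counts;
}

// Cosine similarity between two sparse count vectors.
function cosine(a, b) {
  var dot = 0, na = 0, nb = 0, k;
  for (k in a) { na += a[k] * a[k]; if (b[k]) dot += a[k] * b[k]; }
  for (k in b) { nb += b[k] * b[k]; }
  return dot ? dot / (Math.sqrt(na) * Math.sqrt(nb)) : 0;
}

// Predict a grade as the similarity-weighted average of known ratings.
function newsGrade(article, ratedCorpus) {
  var weightSum = 0, gradeSum = 0;
  ratedCorpus.forEach(function (item) {
    var w = cosine(wordVector(article), wordVector(item.text));
    weightSum += w;
    gradeSum += w * item.grade;
  });
  return weightSum ? gradeSum / weightSum : null;
}

// Example: the article shares vocabulary only with the first, highly
// rated item, so the predicted grade leans toward 0.9.
var corpus = [
  { text: "The senator voted on the budget bill today", grade: 0.9 },
  { text: "You won't believe this shocking celebrity secret", grade: 0.2 }
];
console.log(newsGrade("The budget bill passed after the senator voted", corpus));
```

The real system would also fold in entity extraction and the structured user feedback described above, but the shape — compare new content to rated content, weight by similarity — is the same.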
You can read our full proposal here. If (and that’s a big if, since there are hundreds of entries) the KNC people end up being interested in the idea, you can be sure I’ll be writing more about it when we put together a business plan and timeline for the second round of the competition.
I’ve been pretty interested in mechanisms for machine learning lately, and since the KNC folks were specifically looking for some proposals about authenticity, trust, and content discrimination, I thought this idea might be up their alley.
Of course, some folks think that open-ended machine learning systems are an all-too-common startup idea which never seems to quite work out. To those folks, I’d like to point out that useful expert systems have been relatively rare until pretty recently, simply because doing it right is computationally intensive.
Furthermore, I think it’s worth noting that where this sort of system has worked in the past, it’s worked really well. Netflix, for instance, paid out a million-dollar prize in 2009 for improving their recommendation algorithms by a mere 10%. Amazon relies on its recommendation system as a driver of sales. There’s just no question that systems like these can work; the only questions are what do you want to measure, and how do you use the information?
Netflix and Amazon want to predict what an individual will think about a particular recommendation. Will you buy it? Will you like it? And that’s a great idea — it drives commerce on these sites, and makes them more useful for users.
But the questions we’re interested in answering don’t rely on personalization; we’re not so much interested in what a particular user cares about. We’re interested in predictive modeling of things like bias, completeness, and novelty, independent of the tastes of a particular user. The question we’re asking is, “Can we discover good journalism, regardless of subject matter?”
We think the answer is going to be that we can. We won’t know until we actually run the experiment, however. As far as I can determine, nobody’s ever tried precisely what we’re proposing vis-a-vis journalism on the web. Only time will tell if we get the opportunity to try.
**Update, 1/12/2010: The Knight Foundation declined our proposal. Anyone want to fund the idea? Otherwise, I’d say it’s dead in the water.**
Another diabolical invention.
Here in the midwest, there’s a local custom of playing a beanbag-tossing game known as “Cornhole”. If you ever find yourself in a tailgating situation around here, you’ll see lots of people playing it. In fact, it’s so popular that there’s even a group called the American Cornhole Association, which publishes the official rules and hosts tournaments. They even have a website at playcornhole.org.
So I thought of a game that could be played as a supplement to cornhole. I’m calling it “Kick My Balls”.
The principle is pretty simple:
You use a standard basketball and basketball hoop attached to a plywood backboard. The net is tied at the bottom to hold the ball. The ball starts out cradled in the net. The player kicks the ball through the bottom of the net, and tries to get it to land back in the net. If the player is successful, they are assigned points based on the height the ball reached. If the ball lands outside the net, no points are awarded, and the player has to run after and retrieve the ball.
P.S., since this is the second invention I’ve posted, I’m going to make a new category for them.