How do you allow consumers to modify the software that’s on their desktop computers — to be able to take parts of Windows, iTunes and Photoshop and put them on the same screen — without having the entire legal departments of Microsoft, Apple, Adobe and other companies knocking on their doors?
The answer may lie in Prefab, a simple, elegant solution developed at the University of Washington by James Fogarty, an assistant professor of computer science and engineering. Because Prefab works at the pixel level rather than on source code, the high-priced lawyers in Redmond and Cupertino should have no quarrel with the licensing agreements consumers click through, and the executives those lawyers work for should consider that Prefab can bring them new customers, says Fogarty.
“There’s been 30 years of research to try to make software more accessible for those with disabilities,” Fogarty told TechNewsWorld. “The idea is that we can write it once and it will work on all these applications. You as a software developer would get that new population (of consumers), that new accessibility to these people for free from us, and you would just have to write the software once.”
One example is the "bubble cursor," a way for people with Parkinson's disease and other neurological disorders to interact with their computers without having to land the mouse pointer directly on a target. They can click in the vicinity of what they want to select, and a potential way of communicating suddenly becomes available to them.
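The core idea behind a bubble cursor can be sketched in a few lines: instead of requiring a click exactly on a target, the pointer "grabs" the nearest target within some forgiving radius. This is a minimal illustration, not Prefab's actual implementation; the function name, coordinates, and fixed radius are all hypothetical, and the published bubble cursor technique dynamically resizes its radius rather than using a fixed one.

```python
import math

def bubble_cursor_target(cursor, targets, max_radius=40):
    """Return the target closest to the cursor, if any lies within
    max_radius pixels -- the 'bubble' that forgives imprecise aiming.
    cursor and targets are (x, y) pixel coordinates."""
    best, best_dist = None, float("inf")
    for t in targets:
        dist = math.hypot(t[0] - cursor[0], t[1] - cursor[1])
        if dist < best_dist:
            best, best_dist = t, dist
    return best if best_dist <= max_radius else None

# A click at (105, 98) misses the button at (120, 100) by about 15 px,
# but the bubble cursor still selects it.
buttons = [(120, 100), (300, 250)]
print(bubble_cursor_target((105, 98), buttons))  # -> (120, 100)
```

A click far from every target returns `None`, so ordinary empty-space clicks still behave normally.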
However, Fogarty also sees other applications that could make life a lot easier for those writing versions of their software for mobile devices. "If you want to make software for a mobile device, you have to throw away the existing code and write a new application for that device, and if there are six different devices you have to write it six times, once for each device," Fogarty said. "We can provide it so that you can run your existing application in the cloud somewhere and automatically generate an interface that runs on your phone. We're trying to break down the differences between what a particular interface looks like and what that underlying functionality is."
TechNewsWorld recently interviewed Fogarty in his offices in the Paul Allen Computer Science and Engineering building at the University of Washington in Seattle. He talked about the ideas that helped him to build Prefab.
James Fogarty: I’ve done a bunch of research in the past 10 years, and there’s always been this interesting challenge of how to actually get it out to people — how to transition those ideas into everyday software. And that’s been hard as one person in a lab, or two people or five people in a lab. You can’t, for example, implement an entire massive commercial software project just to add your idea to it. So this idea came from how we could add new capabilities to existing complex pieces of software. In part it’s also a selfish motivation, to be able to have more impact with the research, but I think it’s got broader applications as well.
TechNewsWorld: The bottom line here is that I’ll be able to customize the software that I’ve got, that I see on my computer monitor, and run apps from a bunch of different parties in the same toolbar. Am I simplifying it or is that pretty much it?
I think that’s one example of something you can do with this. The thing that our software does is, it sees your computer the same way you do. So just like we’ve seen mashups on the Web from people being able to see different parts of the Web and pull them together, you can see mashups on the desktop, because we’re exposing representations that allow people to do that, and it just hasn’t been possible before. So combining functionality from different software, adding new data, adding new functionality to single pieces of software — these are all different options.
TNW: But you’re not tinkering with the source code, are you?
No, we’re only working with the pixels — what you can see, I can see — and that’s actually really important, because that’s why it can work. There are so many different ways to implement an interface; if we tried to account for every single one of them, we’d run into the fact that we don’t have the source code for this one, or that there are all these different ways to make it work, whether it’s in Java or something else, or in Flash or on the Web. But everything ultimately puts pixels on the screen, and because we only work from that, we have one common way of interacting with things that allows us to make these changes.
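The pixel-only approach Fogarty describes can be illustrated with a toy template match: scan the raw framebuffer for a known patch of pixels, with no knowledge of how the interface was built. This sketch is an assumption-laden simplification, not Prefab's actual algorithm; the function name and the 0/1 pixel values are invented for illustration, and a real system would handle anti-aliasing, themes, and scaling.

```python
def find_widget(screen, template):
    """Scan a screen (a 2-D list of pixel values) for an exact match of
    a smaller template patch, returning its top-left (row, col) or None.
    Nothing here depends on the toolkit that drew the pixels -- Java,
    Flash, or native code all look the same once rendered."""
    th, tw = len(template), len(template[0])
    sh, sw = len(screen), len(screen[0])
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return (r, c)
    return None

# A toy 'screen' where the 1-pixels trace a tiny button-corner pattern.
screen = [[0, 0, 0, 0],
          [0, 1, 1, 0],
          [0, 1, 0, 0],
          [0, 0, 0, 0]]
button_corner = [[1, 1],
                 [1, 0]]
print(find_widget(screen, button_corner))  # -> (1, 1)
```

Once a widget's location is known, a tool can overlay, enlarge, or redirect input to it — which is the same leverage a screen magnifier already uses, as the interview notes below.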
TNW: In terms of what you’ve been able to find out about what you can legally do with licensed software on your own computer, there shouldn’t be any real hurdles here, right?
Well, it’s a fundamentally new technology, so it’s an interesting question, and it’s one that’s important to us. We think it’s important that you still have the full software running on your machine — it’s still a fully licensed piece of software, and we’re just working with the pixels. Right now, if you’re a person with low vision, there’s a magnifier application in Windows that makes the pixels bigger and easier to see. We’re doing the same thing: we’re changing the pixels so that it’s easier for you to work with, and we’re not modifying the software.
So… let me get this straight: I build out an app on this pixel-manipulation API, and now I’m more or less permanently locked into a particular version and build of a piece of software I own? >>Yawn<< This seems less revolutionary and more retro to me:
Unless I’m missing something, it’ll have all the same downsides as screen scraping, too. The only real difference is "data resolution": in the 1980s it was text characters off a green-screen mainframe somewhere; in 2010 it’s pixels on your LCD monitor.