A Different Kind of Hacking
I hack quite a bit of code, both for fun and funding, but this weekend I hacked up a zero-budget video demonstrating the use of orthogonal fisheye transformation as a surrogate for a big screen on handhelds. Fortunately there was some programming involved, but mostly my time went to battling the heat strokes of my computer and ADSL modem, a malicious sound card, various video formats, and the lack of satisfactory FOSS video editing software. Oh, and since I didn't have a microphone, I had to use my digital camera's dictaphone feature. But that introduced a lot of noise and clicks that I'll have to edit out some day.
In a way all this was fun, and should I ever have a lot of extra time, it would be fun to really delve into this subject: study video formats and more than basic computer graphics, learn solid modelling and Blender, and make these tools as easily approachable to a layman as a pen is for writing.
3 Comments:
Nice! Does anybody have good links to other crazy solutions for bandwidth- and screen-size-limited user interfaces to the Web, or similar systems? I have always thought that a synchronized cache of hash-indexed blocks of data, with only diffs sent to the device, would be nice. I suspect this could compress 90% of my Web traffic, or more.
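Roughly what I have in mind, as a toy Python sketch (the block size, the SHA-256 indexing and the function names are arbitrary choices for illustration, not anything an existing system prescribes):

# Toy sketch: a synchronized, hash-indexed block cache.  The client keeps
# blocks keyed by their SHA-256 hash; the server sends only the blocks the
# client does not already have.  Block size and hash choice are illustrative.
import hashlib

BLOCK_SIZE = 4096  # illustrative only

def split_blocks(data: bytes):
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def block_id(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def encode(data: bytes, client_has: set):
    """Server side: a manifest of block ids plus only the missing blocks."""
    manifest, payload = [], {}
    for block in split_blocks(data):
        bid = block_id(block)
        manifest.append(bid)
        if bid not in client_has:
            payload[bid] = block  # only this part crosses the wire
    return manifest, payload

def decode(manifest, payload, cache: dict) -> bytes:
    """Client side: rebuild the stream from cached and newly received blocks."""
    cache.update(payload)
    return b"".join(cache[bid] for bid in manifest)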
Note that it is actually not so difficult to read a document through a sheet of paper with a relatively small hole in it, especially if you can freely move both sheets with your hands along three spatial axes. So this small-screen problem is as much about the ergonomics of (hand) control as about rendering the screen in a clever way.
BTW, I have always wondered whether it would make sense to render remotely and then send only the relevant part of the screen to the terminal. With Web 2.0 pages carrying tons of JavaScript, this could save a ton of bandwidth. Of course you need to send the clicks back to the remote JavaScript engine, render again remotely, and update the terminal.
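A toy sketch of the server side of such a scheme, assuming the server keeps the previously sent frame and both frames are plain 2-D lists of pixel values (the 32-pixel tile size is an arbitrary choice):

# Toy sketch: server-side tile diffing.  Only tiles that changed since the
# last frame are sent down to the terminal; clicks come back the other way.
TILE = 32  # illustrative tile size

def changed_tiles(old_frame, new_frame):
    updates = []
    for y in range(0, len(new_frame), TILE):
        for x in range(0, len(new_frame[0]), TILE):
            old_tile = [row[x:x + TILE] for row in old_frame[y:y + TILE]]
            new_tile = [row[x:x + TILE] for row in new_frame[y:y + TILE]]
            if old_tile != new_tile:
                updates.append((x, y, new_tile))  # only these go to the terminal
    return updates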
You could also render progressively: first with 1-bit pixels (B/W), then greyscale, then colors, while simultaneously increasing the resolution second by second, until the user changes the picture, at which point you reuse as much data as you can and start redrawing.
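For example, with the Pillow imaging library (assuming the rendered page is available as an image file; the stage names and sizes below are just illustrative), the server could emit successively richer versions of the same frame:

# Toy sketch: progressive refinement of one rendered frame, from coarse
# 1-bit black and white up to full-resolution colour.
from PIL import Image

def progressive_frames(path, sizes=((80, 60), (160, 120), (320, 240))):
    """Yield (stage, image) pairs from coarsest to finest."""
    original = Image.open(path)
    for size in sizes:
        small = original.resize(size)
        yield "bw", small.convert("1")     # 1 bit per pixel
        yield "grey", small.convert("L")   # 8-bit greyscale
        yield "rgb", small.convert("RGB")  # full colour

# The terminal would show each stage as it arrives and stop requesting more
# once the user moves on, reusing whatever has already been received.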
I know we have broadband coming to all terminals, but it just bugs me to send over irrelevant information that the user will never actually see. And it bugs me to resend the same bytes, kilobytes and megabytes over and over again over the same channel.
I'd guess most PDA developers, as well as the Maemo and Qt folks, must have spent much time on developing web browsers for small screens. But their solutions fall far short of "crazy" ;-)
Regarding bandwidth limitation, I know the WAP and WBXML guys spent quite a bit of effort on compressing the data stream from the server to the client, but I haven't heard of solutions based on sending diffs against a priori known blocks. I guess the challenge is in the server trying to figure out which block would yield the smallest diff; I don't know of a significantly sublinear algorithm (in the number of known blocks) for doing that.
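To illustrate the problem, the obvious brute-force approach is linear in the number of known blocks: compare the incoming data against every block and keep the best match (a toy Python sketch; difflib's similarity ratio stands in for whatever diff metric one would actually use):

# Toy sketch: picking the block with the smallest diff by trying them all,
# which is exactly the linear cost the comment above worries about.
import difflib

def closest_block(new_data: bytes, known_blocks):
    """Return (index, similarity ratio) of the best-matching known block."""
    best_index, best_ratio = -1, 0.0
    for i, block in enumerate(known_blocks):
        ratio = difflib.SequenceMatcher(None, block, new_data).ratio()
        if ratio > best_ratio:
            best_index, best_ratio = i, ratio
    return best_index, best_ratio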
Rendering remotely might be possible, and VNC and some other systems do it with significant success in most cases. The significant drawback is likely responsiveness and real-time behaviour. Remember that AJAX and some other techniques try to move the computation even closer to the user, away from servers, with the express intent of improving responsiveness.