The commands are actually interpreted by Apple’s servers, and then intercepted by a proxy I have running locally. I then simply use the Twitter gem to make the post. Confirm and cancel both work by voice and by clicking the onscreen buttons. You may notice the results appear to come from WolframAlpha; I just used this response format because it allows returning arbitrary text and images, as well as a confirmation. There is no interaction with Wolfram’s servers.
This is much more elegant than the SMS-message workaround for posting to Twitter that has been shown before (though it requires a lot more setup).
Finally got around to sifting through the hundreds of photos I took while in New Mexico visiting friends. There were many bad pictures, many ok ones, and many that are great snapshots/memories but not great “photographs”. I picked out 41 that I quite liked.
This one was shot just after the sun went down. The long exposure, in addition to making day from night, creates an awesome blur from the motion of the clouds (even in just 30 seconds).
This one is a pseudo-HDR photo. It’s stitched together from 3 different photographs, and then I bumped up the exposure a bit on only the foreground.
This dog was extremely shy, but I managed to get a few good shots of him anyway.
You can see the rest of my favorites in the full set or the slideshow below.
While I was in NY last week I went full-on tourist mode, camera around my neck, looking up at all them tall buildings. Most of the photos were so-so; I whittled them down to ones that were at least presentable for Flickr.
There were two in particular that I quite liked:
The first I actually shot somewhat by mistake. I really like the typical “pissed off New Yorker” look on the subject’s face, combined with the dangling cigarette and the iconic white earbuds. I chose to go grayscale here, as the background colors distracted a bit from the main subject.
I also quite enjoyed this next photo, because it looks as if both of them are looking at each other as if the other doesn’t belong.
The rest of the photos in the set are not too noteworthy, but I’ll throw them up here as well for possible critique — see the full set.
Just got my new DSLR (Canon T1i). It just so happens it is the only camera I have owned that doesn’t also make phone calls. The technical aspects of photography are what piqued my interest, and I’m really enjoying it so far, but I currently have no idea what I am doing. I did manage to take all of these photos in Manual mode, though, and the exposures seem reasonable to me.
If you know a thing or two about photography, tell me why my photos suck (besides the boring subject matter), so that I might learn something.
I just happened upon this TED talk and I must say, I don’t think I have ever seen a talk that more closely mirrors my feelings on a subject.
Throughout my entire education I hated math. Even though I often found the concepts incredibly interesting (and often they even came very intuitively to me), I absolutely abhorred the manual computation. I never quite understood why I should be doing manually the things that my calculator was perfectly capable of doing for me (if not by default, then with a simple TI-BASIC script). In fact, I found some of my best learning occurred when I ignored the sprawling computation unfolding on the blackboard and instead spent my class time writing a calculator program to solve the problem for me. Unfortunately for me, my grades weren’t dependent on whether I understood the concepts or not, just on whether I could mindlessly perform the calculations. It bothered me to no end that people who clearly had no grasp of any of the concepts (but took the time to work through enough problems that they could mindlessly reproduce the series of steps required of them) were getting better grades than me. (I’m not really sure I agree with our whole grade-based system either, but that is a whole other issue.)
I will admit, in elementary school I also thought the same thing of basic arithmetic. “Why do I need to be able to add and multiply when my calculator can do it for me?” – Though it is true that my calculator can do basic arithmetic for me, it is (as mentioned in the talk) often still more convenient to be able to do certain calculations mentally, especially when estimating (again, mentioned in the talk). In hindsight, I do wish I had spent more time learning this basic arithmetic rather than writing it off as I did other, more complex computation, because I find that (unlike the complex computation) it does still affect me outside of academia. (I can do it; it just frustrates me when I have to pause on some mental arithmetic that should be instant.)
To be clear, I am not (nor do I think the talk was) suggesting that we completely eliminate the concept of hand computation from education. There are certain concepts that I’m sure are best understood by working them out by hand (at least at first), but our entire system is currently centered around it, and that just seems backwards… it encourages people to focus on the insignificant details, often completely missing the big picture.
I truly believe that if it weren’t for all the senseless hand computation that was shoved down my throat, not only would I now enjoy math (instead of hating it), but I would also have a far deeper and more useful understanding of it.
One last kinect project before the weekend is over.
Controlling Mario Kart with the Microsoft Kinect.
I use the depth information to detect the positions of both of my hands, as well as the position of my leg. These inputs are then mapped to keyboard keys to be used as input in other applications; in this case, an SNES emulator.
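The detection-to-keys pipeline can be sketched roughly as follows. This is a minimal NumPy illustration, not the actual project code: it assumes the depth frame arrives as a 2-D array of millimeter distances, thresholds it to keep only the nearest blobs, takes the center of mass of each half of the frame as a hand position, and maps the tilt between the two hands to a steering choice. The 900 mm threshold and the 20-pixel dead zone are assumed values; the leg detection and the actual key injection into the emulator are left out.

```python
import numpy as np

THRESHOLD_MM = 900  # anything closer than ~90 cm counts as a hand (assumed value)

def hand_positions(depth, threshold=THRESHOLD_MM):
    """Return (row, col) centers of mass of the near blobs in the left and
    right halves of the depth frame, or None where nothing is close enough."""
    mask = (depth > 0) & (depth < threshold)  # 0 means "no reading" on the Kinect
    h, w = depth.shape
    centers = []
    for half in (mask[:, : w // 2], mask[:, w // 2 :]):
        ys, xs = np.nonzero(half)
        centers.append((ys.mean(), xs.mean()) if len(ys) else None)
    # shift the right-half column back into full-frame coordinates
    if centers[1] is not None:
        centers[1] = (centers[1][0], centers[1][1] + w // 2)
    return centers

def steering_key(left, right, dead_zone=20):
    """Map the tilt of the imaginary wheel (the line between the two hands)
    to a key choice; within the dead zone, no turn key is pressed."""
    if left is None or right is None:
        return "straight"
    tilt = right[0] - left[0]  # positive: right hand lower than left
    if tilt > dead_zone:
        return "right"
    if tilt < -dead_zone:
        return "left"
    return "straight"
```

From there, the chosen key would be fed to the OS as a synthetic key event so any application, the SNES emulator included, sees it as ordinary keyboard input.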
Apologies for how poorly I play; it’s quite late, so I haven’t had much time to practice with this new input method.
- Will post another video tomorrow once I have had more practice.
A threshold is used on the depth-map to filter out everything but my hands, and then blob detection is used to locate their centers. This information is then used to scale and rotate an onscreen object.
Note that because the Kinect provides depth information, the object can be rotated on both its Z and Y axes. With a bit of work, a gesture could theoretically also be made to rotate it along the X axis.
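A sketch of how both rotations can fall out of the depth data, under assumed conventions (NumPy, millimeter depth values; the 900 mm threshold and the `base_span` constant are made-up numbers): the in-plane angle between the two hand blobs gives the Z-axis rotation, the difference between their average depths gives the Y-axis rotation, and the distance between them gives the scale.

```python
import math
import numpy as np

def blob_center(mask):
    """Center of mass of a boolean blob mask as (row, col), or None if empty."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return ys.mean(), xs.mean()

def scale_and_angles(depth, threshold=900, base_span=100.0):
    """Threshold a depth frame, treat the near blobs in the left and right
    halves as the two hands, and derive (scale, rot_z_deg, rot_y_deg)."""
    mask = (depth > 0) & (depth < threshold)
    h, w = depth.shape
    left = blob_center(mask[:, : w // 2])
    right = blob_center(mask[:, w // 2 :])
    if left is None or right is None:
        return None
    right = (right[0], right[1] + w // 2)  # back to full-frame coordinates
    dy = right[0] - left[0]
    dx = right[1] - left[1]
    span = math.hypot(dx, dy)
    scale = span / base_span                  # hands further apart -> bigger
    rot_z = math.degrees(math.atan2(dy, dx))  # in-plane tilt of the hand line
    # Average depth of each blob: pushing one hand forward relative to the
    # other twists the object about its vertical (Y) axis.
    left_d = depth[:, : w // 2][mask[:, : w // 2]].mean()
    right_d = depth[:, w // 2 :][mask[:, w // 2 :]].mean()
    rot_y = math.degrees(math.atan2(right_d - left_d, span))
    return scale, rot_z, rot_y
```

The same blob centers drive both axes, which is why the Y rotation comes essentially for free once the depth camera is in the loop; a 2-D camera would only ever see `rot_z`.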
* Sorry about the flickering; this is an artifact of the screen recorder I am using, and is not visible in actual use.
This is just a quick demo showing background removal, using both a simple threshold and a captured depth map (and image) of the empty scene.
Any foreground objects are then (arbitrarily) moved forward in the scene, and due to the motion, a parallax effect can be observed.
If the depth information were actually used to offset the foreground objects, a pretty convincing effect could be achieved for applications that don’t require too much depth in the scene, like a head-tracking effect.
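The background-removal and parallax steps described above can be sketched like this, assuming NumPy arrays for the depth maps and images; the 50 mm margin and the fixed 15-pixel shift are arbitrary illustration values (per the last paragraph, a more convincing version would derive the shift per pixel from the depth itself).

```python
import numpy as np

def foreground_mask(depth, background_depth, margin=50):
    """Pixels measurably nearer than the captured empty-scene depth map count
    as foreground; zero depth readings (Kinect 'no data') are ignored."""
    valid = (depth > 0) & (background_depth > 0)
    return valid & (depth < background_depth - margin)

def parallax_composite(image, depth, background_depth, background_image, shift=15):
    """Paste the foreground pixels back over the clean captured background
    image, offset horizontally to fake a change in viewpoint."""
    mask = foreground_mask(depth, background_depth)
    out = background_image.copy()
    ys, xs = np.nonzero(mask)
    xs_shifted = np.clip(xs + shift, 0, image.shape[1] - 1)
    out[ys, xs_shifted] = image[ys, xs]
    return out
```

Because the empty-scene capture supplies both the reference depths and the pixels to reveal behind the moved object, no inpainting is needed; the holes left by the shifted foreground are simply filled from the stored background image.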