slowmoVideo is a slow-motion video editor that can slow footage down by large factors, calculating the intermediate frames using optical flow. It's currently available for Linux, and you'll need an NVIDIA card to try it out.
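The core idea of flow-based interpolation can be sketched in a few lines. In slowmoVideo the flow field comes from a GPU optical-flow estimator (hence the NVIDIA requirement); in this illustrative sketch a hand-made constant flow stands in for the estimated one, and the warp step is done in plain NumPy. The function name and parameters are assumptions for illustration, not slowmoVideo's actual API.

```python
import numpy as np

def warp_midpoint(frame, flow, t=0.5):
    """Backward-warp `frame` by a fraction t of the per-pixel flow
    (nearest-neighbor sampling; a real tool would interpolate and
    blend warps from both neighboring frames)."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Each output pixel samples the source at its position minus t * flow.
    src_x = np.clip(np.rint(xs - t * flow[..., 0]), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(ys - t * flow[..., 1]), 0, h - 1).astype(int)
    return frame[src_y, src_x]

# Toy example: a bright square that moves 8 px to the right between
# frame A and frame B, so the flow is a constant (8, 0) everywhere.
a = np.zeros((32, 32), np.uint8); a[12:20, 4:12] = 255
flow = np.zeros((32, 32, 2), np.float32); flow[..., 0] = 8.0
mid = warp_midpoint(a, flow, t=0.5)  # square now sits 4 px further right
```

At t=0.5 the square lands halfway between its positions in the two frames, which is exactly the in-between frame a simple time stretch cannot produce.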
slowmoVideo downloads and sources
Wow!!! Looking at the other demo videos that's amazing!!
I'm going to have to get Linux installed now. (oh and record some video to test it with)
Coolio! Hopefully I'll have enough space on my 40gb external linux hard drive :P (For that and the Dream Studio :D)
I don't see the point of this, sorry. The same thing can be done easily by scaling up the strip in the video editor. Also, to show how good this could be, I would suggest an action scene, something where we could really look and notice a huge difference; the pendulum of a wall clock won't make for good publicity.
@AnimaticoideX: it would be for live-action video; Blender does do green screen.
@ john mervin: hmmm......
The cloud timelapse comparison video demonstrates the power of slowmoVideo much, much better than that clip.
Simply slowing down the original video won't work if the FPS is low, or if there's a lot of slowing. While the cloud video isn't perfect, it could fool many people, even when played TEN TIMES slower than the original. Superb!
@AnimaticoideX check out the other sample vids for more action.
As for being able to do it in blender, yes for animations, but just scaling up live footage gives very choppy results. This software interpolates where the time is stretched. One of the other videos shows this nicely. I did notice a few artefacts where the original motion was too jerky (most notably a tree in the cloud video), but overall it does a pretty impressive job.
Alright, that is grand. I have been after an open-source program to do this for ages :-) and now I have one. Time to start running some live-action test shots :-) This has made me super excited :-D
@AnimaticoideX - Just look at the cloud demo video, particularly the part titled "Difference to simple time stretching." It shows the difference between what you'd get from Blender vs. what you'd get from slowmoVideo. Blender's VSE creates the in-between frames by just copying the first frame without changing it, so if you slow the video down by 10 times, everything freezes for 10 frames and then "jumps" (like in the time-stretching part of the cloud demo video). slowmoVideo creates the in-between frames with a process more like morphing, so each frame is an educated guess at how it would look in reality, giving a much smoother result.
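The freeze-and-jump effect described above is easy to see as an index mapping. This is a hedged sketch (function names are illustrative, not any tool's actual API): naive stretching maps many output frames to the same source frame, while an interpolating tool targets fractional source times and must synthesize a frame for each one.

```python
def stretch_indices(num_source_frames, factor):
    """Output-frame -> source-frame mapping for naive time stretching:
    each source frame is simply repeated `factor` times."""
    return [out // factor for out in range(num_source_frames * factor)]

def stretch_times(num_source_frames, factor):
    """What a flow-based tool targets instead: fractional source times,
    each needing a synthesized in-between frame."""
    return [out / factor for out in range(num_source_frames * factor)]

print(stretch_indices(2, 5))  # [0, 0, 0, 0, 0, 1, 1, 1, 1, 1] -- 5-frame freezes, then a jump
print(stretch_times(2, 5))    # [0.0, 0.2, 0.4, 0.6, 0.8, 1.0, ...] -- smooth targets
```

The first mapping is why a 10x stretch in a plain sequence editor looks choppy: the motion only advances once every 10 frames.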
Ooh no... I have no NVIDIA GPU...
Any possibility for a Windows build? I could really use this for a project I'm working on.
I want this in the Blender VSE for time stretching and motion blur :)
AnimaticoideX: The example with the pendulum behind glass(!) is a very good one because that's a difficult situation for most time stretching algorithms...
Another cool demo, and another hasty "what's the point" snap-judgement. Must be Saturday!
Seems that way...
The guy's thesis on the program is on his site for download. People should read that for all the gritty details of what exactly this program does. Simply throwing video into a sequence editor and squashing/stretching its playback time is completely different.
It's like shunning a Ferrari because a donkey and cart "does the same job exactly the same".
Can someone (?) build this for Windows?
Looks really cool and stuff, but I'm just not into compiling source code...
I watched all the videos, and they clearly show its power. It works directly on the already-existing frame images. However, my point was about rendering animations: I said scaling factor on the sequence strip, which is different from applying a Speed Control effect to your final image sequence; that obviously gives choppy results because there is no pixel alteration between frames.
Here is a video of 1x, 5x, and 10x using the scaling factor before rendering; that's what I meant.
Wish this were available for Windows 7 so I could give it a try; it seems very interesting.
This is just amazing!
Finally I can execute some of my time-lapse projects. Unfortunately my laptop has no NVIDIA card, but as soon as I reach my home machine I'll start testing.
If this really does what's claimed there's a huge point to it. The pendulum transitioning across the edge of the window confuses other fluid flow algorithms - for example, try it with Avid's Fluid Film - and the quality of the original footage is poor enough to make it even harder.
I have just been dealing with audio sync issues on a recording-session video. I had to combine HD footage from seven different actual recording sessions of the same song with the final mix. Because the performances varied, so did the timing. Using other fluid flow tools, it took me two days to clean up the artifacts introduced by the varispeed process on a 3-minute clip.
I'll certainly be giving this one a try.
Just checking out the demo footage shows similar artifacts to other fluid flow algorithms. In particular, look at the smoke slo-mo as the heavy loop crosses the lighter one. You'll see positional ambiguities which would need to be cleaned up by hand before it could be used.
The other posts all show similar issues, except for the pendulum. It leads me to believe that one has been cleaned up.
I'll still give it a try, though.
If Sam Peckinpah were alive to see this, he would get a never-ending erection!
Wow, this sure would be useful to have in Blender for the MANGO project.
Too bad it needs NVIDIA to work. I hope someone creates this for Blender!