
Give your feedback for improving Blender's Sequencer (non-linear film editing)


The primary developer of Blender's Sequence Editor, Peter Schlaile, has requested feedback and suggestions for improving Blender's Sequencer (referred to as an NLE, a non-linear editor, in most other software).

The wiki page for suggesting features and giving feedback is here

Blender Sequencer Feedback

The page lists both near-term planned improvements and already-suggested improvements and changes, and gives space for further suggestions.

If you don't have edit permissions on the wiki, register an account here

Blender Wiki Signup

Then email your user name to LetterRip AT gmail DOT com for edit permissions.

You will then receive a confirmation email, after which you can add your suggestions.

119 Comments

  1. I can recommend having a look at Vegas Video; it's the most usable video editor I've worked with. One reason is that you can drag files straight into it. It also has realtime video FX, so there's no pre-rendering, and the thumbnail view on the video strip is excellent.

  2. Hi!

    For my part, I'm a user of Magix Video Deluxe 2006 Plus.

    I have used Magix for video editing almost since it first existed.
    It is a very friendly program with professional features at a very affordable cost: about 50 Euros for the standard version, with 16 tracks supporting all kinds of media (image and sound) and effects (SFX, transitions, titles), and about 90 Euros for the Plus version (32 tracks and more features).

    This said, I use Blender video editor for some effects that are not available in Magix, like the Glow effect. I also use it to mix scenes and video, which is only possible inside Blender.

    As I like the Blender Glow effect a lot, I'd like more amplitude available in this effect's settings, to get stronger halos from low lights or small sources.

    Having a video editor inside Blender is a good thing, but as there are efficient video editors at low cost, I wonder if spending too much time on the NLE part of Blender isn't a waste of time.

    It is only my humble opinion, but I think that the developers' energy would be better spent if concentrated on the main purpose of Blender: modelling and animation.

    Users of other 3D software will probably not switch to Blender because of its NLE, but because it is a very handy, efficient and complete 3D package.

    Philippe.

  3. It would be great if you could hear your audio tracks when using the Mouse Recording feature. Right now, you have to listen to your audio using another application and then fiddle with alignments afterwards.

  4. I prefer Vegas as well.
    @roubal: I think you are right; I would invest more time in other things than the NLE.
    There are a lot of cheap (or free, like VirtualDub) video editor solutions. Why should I use this somewhat unhandy NLE editor for compositing?
    Also, if I have to edit a WAV file, I'd use Sound Forge or WaveLab; they have a fast workflow for that.

    Blender is great, but I don't think it's good to make it a universal tool; that will make Blender more unstable and harder to use.
    I would prefer a more consistent workflow/GUI.
    For example, holding down the right mouse button behaves differently in the 3D View and the Node Editor, but Space works in both. There are a lot of these little GUI bugs.
    Also the modifiers need a lot of work...

  5. @roubal & dave62:
    "...there are efficient video editors at low cost, I wonder if spending too much time on the NLE part of Blender isn’t a waste of time."
    "There are a lot cheap (or free vdub) video-editor solutions. Why should i use the kind of unhandy nle editor for compositing."
    This may be true if you are on Mac or Windows, but on the Linux side there are very few stable NLEs. Blender has become the only usable video editor on "odd" hardware platforms such as PPC.

  6. Regarding comments about diverting developer attention to other areas of Blender: in my opinion, people start to develop in OSS/free applications because that is the area they are interested in, whether to add functionality for their own benefit and workflow or for the community. They are interested in developing and maintaining in the area of their expertise, interest, or where they have good knowledge of the application code. To suggest that they should divert their attention elsewhere is wrong.

    Sure, Blender needs work everywhere, and we can all suggest what a developer should concentrate on from our own perspectives, but all developer interest should be welcomed and encouraged by the community.

  7. I really like Vegas too! I don't have time to explain every detail right now, but let's say that the main advantages are:
    - It can do lots of stuff in just one click (really fast workflow)
    - It's a very intuitive program. You just need 30 minutes to learn how to use it!

    One thing that is really hard to use in the Blender Sequencer right now is alpha between clips. You waste a lot of time selecting clips in the right order and selecting the right alpha effect, and you have to redo the whole thing if you didn't choose the right alpha FX.
    An alpha property in the Strip Properties could improve the workflow a lot!

  8. My pipe dream? Node-based effects for NLE.

    I'd love to have node graphs for strip effects. Entering edit mode for a strip in NLE would create a simple input>output graph datablock where I could make use of all the great color correction/effect chains. The datablock could of course be applied on groups/metastrips.

  9. A small overview of the whole film would be great. I mean, like the navigator in Adobe Premiere, so you wouldn't have to move all the time from beginning to end; instead, you'd have a small overview of the whole sequence.
    It would be fine if that navigator could be in a small floating window, like the curve widget or the transform properties (shortcut N in the 3D View).

  10. @ROUBAL:

    Yes, the Sequence Editor isn't an editor's dream... but you should consider that the sequencer was the forgotten part of Blender before Peter woke it up. So wait a short while and it will be fully production-usable!

    greetz

  11. Yes, an easy way to import .tga or .png files so they show the lower layer without having to add an alpha effect.

    Easy cutting/moving of frames, and more file export options.

    Also, how can I import a .tga file to edit in GIMP? GIMP changes my image: the alpha part turns black and the colored part turns to alpha.

  12. I think the Blender Sequencer should be released and advertised as a standalone application (even if the 3D features are only hidden in the GUI). In my opinion that could bring lots of users to it. You don't recommend a 3D editor to someone who just wants to edit a video. The Blender Sequencer is one of the best video editors in the free software world (well, at least if sound synchronisation worked on Linux). Let's learn from Mozilla that standalone applications have their advantages.

  13. One thing and one thing only.

    Sound. Allow control of fade in and fade out of sound. Right now all you can do is cut down the track and reduce the total sound level. And fix the things mentioned on the wiki page (choosing one sound system and perfecting it ;)).

  14. Have a look at Motion from Apple. It's all based in the output view and focuses on realtime art and motion graphics playback. Very smooth to use.

  15. ****The audio strips in the Sequence Editor need an accurate wave sample image. In other words, the reference image of the wave file needs to sync up with the audio during playback and scrubbing. This makes it easier to lip-sync and time video from a visual perspective. Currently the image that shows up on the audio strip is useless. There is also the need to be able to render video along with the audio on Windows.

    I use Vegas Video as well. I don't expect Blender's Sequence Editor to ever have all of the features of a professional non-linear editing package. I believe the developers need to keep improving it, but for the most part keep focus on modeling and animation tools.

  16. Use the sequencer as an interface for creating nodes in the compositor. When you add a movie to a strip, what is actually happening is that you are adding an image node, setting it to movie, setting its frames to the length of the movie, and finally wiring its output to a mix node so you can add it to the current composition. If you want to add an effect, you add the specific effect node between the image node and the mix node. You get the idea.
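    The strip-to-node mapping described here can be sketched in a few lines. This is purely illustrative: the classes below are made up for the example and are not real Blender API.

```python
# Hypothetical sketch of mapping a stack of sequencer strips to a
# compositor node chain. ImageNode / MixNode are invented names,
# not actual Blender types.

class ImageNode:
    def __init__(self, source, start, length):
        self.source, self.start, self.length = source, start, length

class MixNode:
    def __init__(self, a, b, mode="alpha_over"):
        self.a, self.b, self.mode = a, b, mode

def strips_to_nodes(strips):
    """Fold strips (bottom track first) into a chain of mix nodes."""
    graph = ImageNode(*strips[0])
    for source, start, length in strips[1:]:
        layer = ImageNode(source, start, length)
        graph = MixNode(graph, layer)  # each new strip mixes over the result
    return graph

graph = strips_to_nodes([("bg.avi", 1, 250), ("title.png", 40, 100)])
print(type(graph).__name__)  # MixNode
```

    Each additional strip just wraps the existing graph in one more mix node, which is exactly the "image node into mix node" idea the comment describes.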

  17. I do some freelance video editing and would be really interested in a stable Linux video editor, so I think it is great that Blender offers an NLE. I also think that with a good NLE, Blender can be a good all-round motion graphics production solution, given a good plugin structure for the nodes (to add effects).

    Features I would like:

    * preview of the first frame (the first frame shown in the clip bar on the timeline)
    * audio waveforms in the timeline (I use them a lot when editing conversations and surrounding sounds)
    * easy linking and unlinking of sound to a clip on the timeline
    * crossfading (when you have two clips, do a fast scalable crossfade between them: ==clip1====clip2==) like in Final Cut Pro
    * linking clips to effects for the overview (the effects are part of the clip bar on the timeline)
    * effects keyframing
    * scale/rotation/location effects (keyframable)
    * doing a render to disk of imported Blender scenes in the timeline

    * assign [spacebar] to play, [c] to cut, [f] to crossfade, [e] to effect

    * make a video editing panel so you don't have to use the same tools you use in the 3D space for NLE stuff (change Render to Export etc. to make it more accessible for users)

  18. The ability to use nodes for Sequencer operations would be fantastic, especially if data can be grabbed and used from all appropriate internal Blender output (or even other Blender files). Maybe Sequencer output could even be fed back into Compositor/Texture node networks?? The sequencer will soon develop into a very powerful and important feature, in my opinion.

  19. For those who want to disable previewing: I don't know if this is what you want, but you can just turn off the sequencer image preview in visible windows. In the default Blender sequencer layout, that window has no header, but if you add one to it, you'll be able to change from image preview to sequencer strip display, or change it from a sequencer to something else.

  20. Err, for compositing, right? Not just for editing (cut to cut).

    - some node editing (vector) for masking would be nice (like After Effects), so I can apply effects only in the masked area
    - sound mixed with AVI output (Windows)
    - more output options like SWF, 3GP, MPEG etc., and size variations
    - flexible codecs, or more codecs
    - easy DIY plugin development, so the set of effects can grow richer
    - the node compositor (the one for maps/images) applicable to the NLE (I'm really hoping for this one)

    * the distinction between HD and RAM for adding strips in the editor isn't very clear, nor is it intuitive that you can load different formats with it (Linux); RAM seems to mean "the old way" while HD means "the ffmpeg way"
    * audio RAM clips preview the waveform, but audio HD clips do not
    * relative paths work for HD sound clips but not for RAM; RAM sound clips have absolute-path soundblocks even though the strip itself uses a relative path. Worse, it fails silently; you don't realize until you move your files.
    * it is very easy to inadvertently create one-frame gaps between abutted clips. Several remedies are possible here: a change in background color of a frame depending on whether it is "empty" vs. "has content" (gaps would show up as darker lines), or, as suggested by Matt, when you push one strip into another and it displays red, it snaps next to it with no gap instead of jumping up to another track
    * inserting clips in the middle of a full edit is cumbersome; there needs to be a way to "push" the following strips backwards without zooming out, selecting, moving, zooming in, etc.
    * when zoomed out on a small enough clip, it becomes very hard to select the whole clip for moving instead of the end markers
    * personally I don't like the visual noise from frame previewing in strips; if this is planned, please make it optional

    these are ideas/suggestions, not demands :) I'm quite happy with the recent developments in the sequence editor, and look forward to whatever the developers have up their sleeves.
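    The one-frame-gap problem mentioned above is easy to state precisely: two strips abut cleanly only if the second starts exactly where the first ends. A minimal sketch, assuming strips are plain (start, end) frame pairs with the end exclusive (not real Blender data structures):

```python
# Find empty frames between consecutive strips on one track.
# Strips are hypothetical (start_frame, end_frame) pairs, end exclusive.

def find_gaps(strips):
    """Return (gap_start, gap_end) pairs between sorted strips."""
    gaps = []
    ordered = sorted(strips)
    for (s1, e1), (s2, e2) in zip(ordered, ordered[1:]):
        if s2 > e1:                 # any empty frames between the clips?
            gaps.append((e1, s2))
    return gaps

# clip ends before frame 100, next starts at 101 -> one-frame gap
print(find_gaps([(1, 100), (101, 200)]))  # [(100, 101)]
```

    A background-color cue for empty frames, as suggested above, would amount to rendering exactly these intervals differently.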

  22. Ooh, one more:
    * when opening clips, RAM-openable clips get little colored squares next to them indicating they are openable; HD clips are unmarked, making them harder to find or discover.

  23. Being a Linux user with a limited tool set available, I see great use and potential for the sequencer. I currently use Cinelerra for my video editing, but the Blender sequencer is lightweight with an OpenGL interface, which makes it usable on lower-end machines. Cinelerra recommends (or at least they did before they went to OpenGL effects) a dual-processor AMD server with 4 GB of RAM. That said, the Blender sequencer would really change the way I do things.

    I was recently working on a project where I had video captured at 10 fps that I wanted to sync with an audio clip. The sequencer would not let me sync my clip to my audio. Cinelerra detected the length of my clip based on the actual running time of the clip, whereas the sequencer pulled it in based on the frame count. It would be nice to be able to adjust frame rates for individual clips. I hope that makes sense.
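    The sync problem here is just frame-rate arithmetic: a clip captured at 10 fps lasts frames/10 seconds, so on, say, a 25 fps timeline it must occupy 2.5x as many timeline frames to keep pace with the audio. A quick illustrative sketch (the function is invented for the example, not a Blender feature):

```python
# How many timeline frames a clip needs to occupy to play at its
# real-world speed, given its own frame rate and the project's.

def retimed_length(clip_frames, clip_fps, timeline_fps):
    """Timeline frames needed so the clip keeps real-time sync."""
    duration_s = clip_frames / clip_fps     # real-world duration
    return round(duration_s * timeline_fps)

print(retimed_length(300, 10, 25))  # 30 s of 10 fps video -> 750 frames
```

    Per-clip frame-rate settings, as requested above, would let the sequencer do this stretch automatically instead of treating every clip as if it matched the project rate.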

  24. @5to11 :
    Quote - they are interested in ‘developing / maintaining’ in the area of their expertise, interest, or where they have good knowledge of the application code, to suggest that they should divert their attention elsewhere is wrong. - Quote

    I hadn't thought about that, and I think that you are right.

    @anders_gud: you're right too... As I know very little about the software available under Linux, I didn't realize there was a lack of NLEs for this OS.

    @simhar :

    I think that the strength of the Blender NLE is that it allows very interesting operations that can't be done elsewhere, like compositing while rendering between a 3D scene and some already-rendered sequences.

    It also allows some interesting effects (Glow, for example) which are not available in the low-cost editor I use.

    If the Blender NLE becomes a complete editing program with all the functions present in high-cost programs, that's great, and I'll applaud and say a big thank you! My only fear is that the development of these features could slow down the development of the main structure. After reading the post of 5to11, this fear begins to vanish. But like Dave62, I also have a fear about stability as Blender files grow...

    But as I'm really hopeless at programming, my fears may be totally unjustified !o)

    Philippe.

  25. I, personally, really like the sequencer. It surprises me how much I use it. There are a few things that would make it significantly faster for me:

    1) Automatic Alpha Over. You import a strip of background images and place them on a layer, then import a strip of foreground images with alpha transparency and place them on a layer above the background images. Done. No more add images, add images, convert to premul, right-click, shift-right-click, space, Alpha Over, (c, switch a/b if necessary)... I go through that a lot!

    2) MP3 import. Yeah, I know it's not as high quality as WAV, but it's what most of my stuff is saved as. This would save me the step of converting my MP3s to WAVs.

    3) This one is a bug. I don't know if it belongs here, but I thought I would bring it up because it costs me a lot of time. I'll take a meta-strip (with two image strips and a filter) and use it with an image strip and another filter. When I preview or render, the meta-strip will occasionally strobe (random frames get dropped). If I go into the meta-strip and preview it, then go back out and preview or render, the strobe is gone. It would be nice not to have to do that anymore.

  26. One more thing. The only solid way to create screen-capture videos on Linux is to record a VNC session with vncrec. I can then use transcode to convert the .vnc file to an .avi, at which point I can import it into the sequencer. If I could import .xpm image files, it would save many, many hours of work on each video and make producing my Blender tutorials faster. vncrec can dump .xpm files very fast, and being able to import them into the sequencer as a strip would be great!

  27. @Bassam
    Yeah, I know about turning off the preview that way, but it's just a bit cumbersome, especially if you have more than one window set up for previewing.

  28. Also, for those concerned about this distracting the developers: remember that there are many developers involved, and they have their specialties. This development will be handled by the NLE specialists. So there's no distraction, just more needed development.

  29. --> does *anyone* use the "reverse frames" feature?

    Anyway, I think that the sequencer has come a long way since I first fiddled with it. The look & feel, and probably the code, are better now. The demo at SIGGRAPH blew me away; it's really that good.

    I think we should look at it as a good basis for something bigger, and I agree with everything said about programmers doing what they want to in the open source community. It's good Peter took this over.

    As for me, I think more control would be good, more help from the tool (when I first tried using it, I actually opened the code to understand what was going on), many more plug-ins, and a better SDK for them; last time I checked it was per-image plugins only.

  30. I know this would require a lot of work, and I've mentioned it before... BUT I still think Blender would be better off if it ditched FFMPEG and migrated all video and audio I/O to GStreamer. Here's why:

    - It's multiplatform and acts as a nice abstraction layer over system-specific APIs
    - It's fast, with low latency
    - It takes care of sync and frame-accurate seeking issues
    - It still offers access to FFMPEG codecs, as well as others
    - It's less code, a smaller footprint, and easier to manage
    - It would unify the sequencer and game audio I/O
    - It isn't stuck in CVS limbo (it has release versions)

    Second, I like that you can have an HTTP-based frame server. This is nice, especially if you need access to it from a remote machine. But not all programs can use it. Could you look into implementing a file-node-based setup like AviSynth 3? (It's still experimental, but is intended to be multiplatform.)

  31. @Kernon: "It would be great if you could hear your audio tracks when using the Mouse Recording feature."
    Really don't know about the "mouse recording" feature. Could you explain a little bit? (Sync / scrub works for dragging the mouse over the timeline and hearing the scrubbed audio...)

    @dave62: "Blender is great, but i think its not good to make it an universal-tool, it will make blender more unstable and hard to use."
    Don't really understand. The sequencer is an NLE integrated into Blender that works like the rest of Blender, not the other way round. Also, I can't see the stability issues... BTW: I simply like the sequencer for the way its interface works...

    @Ron7: Regarding "implicit alpha over": is already in the wiki. But it is good to hear, that intrr is not alone with his ideas... ;-)

    @Jakub: Regarding "Node-based effects for NLE": Brecht wrote a patch that does this nicely. It adds an additional effect strip that fires up the compositor at that place. Ton doesn't like the idea. You can read about it in the patch tracker:
    https://projects.blender.org/tracker/index.php?func=detail&aid=4920&group_id=9&atid=127

    @simhar: "small overview about the whole film, aka preview thumbnails on the timeline": Like bassam, I don't like this idea very much, for speed reasons. (I personally haven't missed such a feature, but maybe I was in the lucky position of knowing the key positions of the edited movies rather exactly in advance...)

    @Kai: "blender sequencer as a standalone application": Well, then it wouldn't be the "Blender Sequencer", right? ;-) Seriously: I like the Blender sequencer in its integrated form a lot, for the following reasons:
    - Titling can be done directly within the 3D scene editor
    - 3D transitions can be built directly within the 3D editor
    - Compositing can be done easily in the same program
    If you don't like that, maybe Blender isn't your tool of choice for video editing, since I consider these key features which, when removed, would make the thing rather useless to me...

    @Noodlesgc: "open audio files besides .wav" Just use a ffmpeg enabled build, and there you go. Will add ffmpeg input support to Audio (RAM) soon.

    @GKPW: "sound fade control". Is included. Just use the IPO of the sound track and enjoy. Serious mixdown capabilities are in the TODO-list.
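    An IPO-driven sound fade of the kind mentioned here boils down to interpolating volume between keyframes. A minimal sketch with made-up keyframe data, not the real IPO system:

```python
# Linear interpolation of a volume envelope between keyframes,
# as an IPO curve on a sound track would do conceptually.

def volume_at(frame, keys):
    """keys: sorted (frame, volume) pairs; linear interp between them."""
    if frame <= keys[0][0]:
        return keys[0][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)
    return keys[-1][1]          # hold last value after the final key

fade_out = [(100, 1.0), (150, 0.0)]   # full volume at 100, silent at 150
print(volume_at(125, fade_out))  # 0.5
```

    Two keyframes give a fade; more keys give an arbitrary mixdown envelope, which is presumably where the "serious mixdown capabilities" on the TODO list come in.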

    @bestfx: "integrating Blender using Verse is better than a complete sequence editor": I do not agree; see my comment to Kai. Besides, Verse is totally the wrong protocol for that.

    @midije: "audio wav preview not accurate enough": Could be worked on. Need to think of a really _fast_ solution. (I consider the way Cinelerra does it slow...) On focusing on other parts: hey, I like working on the sequencer. Why should I spend my spare time doing the things YOU want me to do?

    @Digital FX Artist: Please take a look at https://projects.blender.org/tracker/index.php?func=detail&aid=4920&group_id=9&atid=127
    and have a discussion with Ton about it. I personally like the idea of "Compositor effects", but I understand Ton's concerns. If you have any idea, that makes everyone happy, then the tracker is maybe the right place to discuss it.

    @gman:
    - preview of first frame:
    Sounds like a doable solution. Is that enough preview for everyone?
    - Audio wave forms: Have to think of a fast way of doing this. Maybe do some background importing/rendering.
    - easy linking of clips: Is in the wiki. Have to think of a way that is consistent with the rest of Blender.
    - crossfading: Since I do not own Final Cut Pro, please explain a little more.
    - linking clips to effects for the overview: Do not really understand; please explain. (You can already see which effect clips are connected...?)
    - effects keyframing: Do not really understand. Have you noticed "IPO frame locking"?
    - scale/rotation/location effects: Is called Transform and is already in CVS.
    - doing a render to disk of scene strips: If I do the "Bake" effect, that should already be included there. But maybe adding a simple button to the "Scene" strips could also do the trick. Will think about it.
    - assign spacebar: Spacebar will always add in the timeline, for interface consistency with the rest of Blender. To make it play, simply move your mouse to the preview window.
    - make a video editing panel: Hmm. Most of the render options also apply to the Sequence Editor. For a consistent interface, I don't think that can be done in a nice way. But if you have any concrete ideas or mock-up graphics, I will look at them.

    @alyx:
    https://projects.blender.org/tracker/index.php?func=detail&aid=4920&group_id=9&atid=127
    and comments above...

    @Bassam:
    - The distinction between HD and RAM isn't "old" and "new". Both are useful if done right. If you want to load the sample directly into the Blender file, use RAM; if you want to keep sound on disk, use HD.
    - Preview of audio on HD tracks: Have to think of a fast way to do that.
    - Relative paths not working on Audio RAM: to put it simply, it can't work, since the samples are packed into the Blender file. You should use Audio HD for what you want. What could be done: add a reload button to the soundblocks (and save the original file name somewhere in Blender).
    - Easy frame snapping: sounds useful. Will add.
    - Inserting clips in between: Hmm. This is really a problem, but I don't have an easy solution right now.
    - Zoomed-out clips not easy to edit: Ouch, hadn't noticed.
    - Don't like "preview thumbnails": Me neither ;-)

  32. First of all, as Bassam said, these are ideas/suggestions, not demands.
    My English is far from perfect, so keep that in mind. I don't want to bash Blender's NLE, just to share some ideas:

    I think the effort should be focused on the upcoming UI redesign of Blender.
    The main weakness, IMO, is the lack of an optimized workflow between Blender and the NLE (and probably with the node compositor soon).
    The NLE and node compositor should work together. Effects should be moved to nodes, and the NLE should only provide editing features (cutting, trimming, crossfades, etc.).
    Assets should be arranged in time using the timeline and later composed together (if needed) using the node compositor.
    The best way to do it would be using the NLE layers as node inputs, not manually but automagically.
    Imagine this: you have renders of a background, a character, and foreground effects. You just load the three strips one above the other, choose a mixing type, and when you go to the compositor, the basic setup is done. Now it's time to animate settings and add extra nodes for special effects.
    It would be fantastic, wouldn't it?

    Well, maybe I'm losing it :-) , but if that were possible, the editing needs would be completely different, and the NLE team could focus on them instead of adding functionality that could conflict with other areas.

    In the editing field, trimming is needed. Professional editing uses this method: you have assets (captures, renders, etc.), you trim the raw material into individual clips, then arrange them in the timeline.
    This is done with two viewers, one for trimming, the other for the timeline. This is useful for assuring continuity: you see the out point of the timeline and the first frame of the trimmed clip, or vice versa.
    The isolated clip is dragged to the timeline, or dropped onto the insertion point (replacing or displacing the existing clip if there is one).

    Blender uses a less effective approach: you open the raw material, put it in the timeline, and cut it using the K key. This is slower and less effective.

    But if we plan to add this feature, we also need better asset management.
    One of the most important things in editing is how the assets are managed and the productivity this provides.
    The file manager or the image browser (or a new library panel) should offer the possibility of dragging and dropping media into the timeline.
    Having clips readily available would be especially useful for repetitive tasks.

    Other things that would be extremely useful in the NLE:
    - automatic alpha over (better if there's a selector for the mixing technique)
    - manual pre-rendering and caching of mixes and overlays for better performance when playing
    - frame skipping on play (optional, for a realtime approximation achieving the selected frame rate)
    - better audio scrubbing/synchronization
    - splitting the timeline into two areas (video and audio), which would be more "standard" and easier on the eye
    - switches for mute/solo/lock tracks

    Well, as I said before, this is just my opinion and is intended to help, not to bash.
    Thanks for this chance to express it.

  33. i'm still kind of new to blender/3d, but started a 3d vlog a couple of months ago. (part 2 is finished, i'm just waiting for the music now.) i wanted to use the blender nle for finishing my animation, but just could not do it because:

    the plug-ins/filters i needed either did not work or did not exist (those i found only gave some error message, after looking in forums i tried to use an older blender version that was supposed to work with those filters, but i could not get it to work) - so what would be needed is:

    • more composite modes/basic filters (many are missing, i needed "darken" combined with gaussian blur)

    • it would be great if the nle editor was more intuitive, simpler. i personally like final cut pro, but most nles today work similar: when you add a filter you just drop it on a particular shot/timeline instead of having an extra timeline for the filter/effect/title. it's nicer to work with, easier to learn and just looks cleaner if you have lots of effects.

    i am very impressed with blender and as a film maker i now plan to move more towards 3d/combine more traditional methods with 3d. while i found blender rather hard to learn at first, i soon realised i could trust the app (if something does not work it probably is not a bug but some detail you've missed) and because everything is very well documented.

    still it took me quite a while to understand and get used to the blender key frame editor - this was the greatest source of frustration for me so far. and i did have some experience with keyframe animation in final cut pro. i mention this because:

    • it would be great if the keyframe editor could be more intuitive so that you can easily use filters in the nle editor via keyframes (i'm not sure how/if this can be done at the moment)

    another thing i needed for my last project was:

    • slow-motion/speed manipulation for clips (again i don't know if this can be done, i just did not find a way to do it)

    so all in all maybe:

    • a simpler, modern ui à la final cut pro (or avid etc.), all standard composite modes, standard set of filters (like gaussian blur, sharpen etc.) that work with the latest blender release and a simple, easy to use keyframe editor. and of course a good documentation.

    to the above discussion about the usefulness of a nle in blender:

    for mac os x there is no open-source nle editor that i could find that works on my system (i tried jahshaka, but i looks like it needs os x 10.4 and i only have 10.3.9 at the moment). i'd love to be able to produce/finish a complete animation just with blender - if this can be done without risking to make the app buggy...

  34. Hey, this discussion is not about where development should focus in the future; it's about the development of one part, made by one man...!!!

  35. @Reuben:

    Switching from ffmpeg to gstreamer: don't like the idea, see why:

    "-It takes care of sync and frame accurate seeking issues"
    doubt that, looking at the code. I had a lot of twiddling with ffmpeg and I'm still not finished doing timecode based seeks.
    They simply seem to assume, that ffmpeg can seek by itself, which is not true on mpeg files for example. But maybe some gstreamer expert can enlight me.

    "-It still offers access to FFMPEG codecs, as well as others"
    some old version, not updated, when I checked last time.

    An additional problem: maybe I misunderstood the documentation, but I was under the impression that you have to hand over the event loop to GStreamer to make it work. I doubt that you will convince Ton (or even me, and I'm a little bit crazy ;-)) on this.

    ----------
    To make a long story short: I will add a plugin interface for audio/video to Blender. If you want to write an input/output plugin using this interface that uses GStreamer, I will not step in your way. But I don't think that GStreamer is the final answer to everything...

    "Second, I like that you can have a http based frame server. This is nice, especially if you need access to it from a remote machine. But not all programs can use it."
    Hmm. That is interesting. Just write a plugin for the other program?

    "Could you look into implementing a file node based setup like AviSynth3? (it’s still experimental, but is to be multiplatform) "
    I don't know what the point is here. Let's take a look: within Windows, you can overload the DLL calls for AVI opening. That is the way VirtualDub does the trick. Within Linux you can... wait, there is no AVI library to overload. So the best thing here is again to simply write a small "glue" daemon that translates the calls.

    ---
    Don't want to bash here, but I don't think it is that easy. If I have simply misunderstood your ideas, please feel free to correct me.

  36. @indiworks:

    "more composite modes/basic filters (many are missing, i needed “darken” combined with gaussian blur)"
    just use a blur node and a mix node in darken mode?

    " it would be great if the nle editor was more intuitive, simpler. i personally like final cut pro, but most nles today work similar: when you add a filter you just drop it on a particular shot/timeline instead of having an extra timeline for the filter/effect/title. it’s nicer to work with, easier to learn and just looks cleaner if you have lots of effects."
    I doubt that, since effect strips can combine several tracks into one output track. And that output track should still be controllable. I personally like Blender's interface here very much. What is really that hard about "select track 1, select track 2, press space bar, add effect"? If your timeline gets cluttered you should think about using metastrips to group your work in a sensible way.

    "it would be great if the keyframe editor could be more intuitive so that you can easily use filters in the nle editor via keyframes (i’m not sure how/if this can be done at the moment)"
    You can do that already. (That's what the IPO window in the upper left is for.) If you want to let the IPO act on frames, switch on "IPO frame locking" in the effect properties. Hope that helps.

    "slow-motion/speed manipulation for clips (again i don’t know if this can be done, i just did not find a way to do it)"
    Good news for you: it is in CVS now and very intuitive. You can even sync your video to audio on a per frame basis.

    " a simpler, modern ui à la final cut pro (or avid etc.), all standard composite modes, standard set of filters (like gaussian blur, sharpen etc.) that work with the latest blender release and a simple, easy to use keyframe editor. and of course a good documentation."
    Don't like the interface of Avid (that is really more ported from the Atari world...), don't know Final Cut Pro, but if there are interface key elements that are nice, please add them to the wiki.
    If you meant the IPO curve editor of Blender by "keyframe editor": once you get accustomed to it, it is really nice.
    And always remember: the interface of the sequence editor is supposed to work like the rest of Blender, not like "name your favourite video editor". We can borrow ideas, but the main idea is to make a consistent and efficient look and feel for Blender users.

  37. - crossfading. Since I do not own Final Cut Pro, please explain a little bit more.

    for example:

    ==clip1=========| |=========clip2=====
    end^ ^start

    now when you right-click on the point where the end point of clip 1 touches the begin point of clip 2 you get a menu with options, and one of them is "add crossfade" (or however they call it). when you select that option the picture looks like this:

    ==clip1========={> | effect

    this is a clip with 2 effects:

    ___________
    | clip
    |___________
    =========== >effect
    =========== >effect2

    this is a clip with an effect and a transition effect:

    ___________
    | clip
    |___________
    =========== >effect

  38. Hi, I'm also one of those who like the sequence editor quite a lot. What I miss is more integration between nodes and the sequencer; I mean, if we have nodes, why not use them with the sequencer and vice versa. I don't know exactly how much trouble it would be to maintain such code, but I think it is a resource not being well used.

  39. I'll comment more on this later, but a couple quick things.

    indiworks, check out HyperEngineAV. It is a commercial Mac video editor that became open source recently. I haven't tried it yet since I'm on Windows, but it looks like it has many professional features:
    http://www.arboretum.com/products/hyperengine-av/hav_main.html

    The Windows state of video editing isn't as good as some assume. For free programs, there is Windows Movie Maker and Avid Free DV. Both are closed source and have some restrictions (WMM only exports to WMV and DV for instance, Avid Free DV is limited to a few tracks, etc).

    I've been looking for some time and have found NO usable open source editors for Windows. Jahshaka I check from time to time, but it has a lot of issues. It seems like the editing portion isn't worked on as much as the other features, and there's been a lot of weirdness with team conflicts. It was not usable the last time I checked it a couple of months ago. VirtualDub is not an editor. You can do some very, very basic things (concat two clips together, cut out a portion of a clip) but it is not made to be an editor. It is basically a compression tool where you clean up your video with some filters and compress it. Avisynth has some great capabilities but it has no GUI and, more importantly, it can't handle opening many source files at once. If version 3 can fix that, perhaps a GUI can be built on top of that version.

  40. hi, sorry for this unorthodox way of posting, but when I just posted in reaction to Peter Schlaile I made some ASCII explanations of features, and BlenderNation now only shows half of my message. So I copied my post into a text file and uploaded it to a file share (lots of banners, though); you can find it here:
    http://w10.easy-share.com/695529.html

    sorry for this; I think my ASCII graphics were filtered out by BlenderNation because of the SQL insert security or something like that.

  41. I'd like to throw it out there that I've been using Sony Vegas exclusively as of this year because of its awesome crossfading method.

    It's also the only NLE that has real native 5.1 surround sound.

  42. @gman:

    if I understood you correctly this time: crossfading will somehow close the gap between two clips. May I ask what video information should be used in between? Just the still begin or end frame?
    My solution to this would be more like: just let people drag the two clips into each other. That will simply crossfade. If you want to do advanced fading, you will have to add a separate effect track as before. Does this also make you happy? (It is even easier than the Final Cut solution...)

    Close gap doesn't really work right now. Has to be fixed.

    "linking clips to effects for the overview": I'm not very convinced after the explanation. I add effects to more than one input track all the time. And I don't want to get confused. You seem to only stack effects on one track all the time. Correct me if I'm wrong, or show me what you want to do any better in the case of 2 input tracks to an effect.
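    Peter's "drag clips into each other" idea boils down to a per-pixel linear blend across the overlap. A minimal sketch of that math (illustrative only; the function names are invented and this is not Blender's actual code):

    ```c
    /* Weight of the incoming clip at overlap frame i (0-based);
       the outgoing clip is weighted 1 - w, giving a linear ramp. */
    float crossfade_weight(int i, int overlap_frames)
    {
        if (overlap_frames <= 1)
            return 1.0f;
        return (float)i / (float)(overlap_frames - 1);
    }

    /* Blend one 8-bit channel value of the outgoing (a) and incoming (b)
       clip with weight w, rounding to nearest. */
    unsigned char crossfade_pixel(unsigned char a, unsigned char b, float w)
    {
        return (unsigned char)((1.0f - w) * (float)a + w * (float)b + 0.5f);
    }
    ```

    An advanced fade would replace the linear ramp with an IPO-controlled curve, which is exactly what the separate effect track already offers.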

  43. @Gez : You wrote :

    - Splitting the timeline in two areas (video and audio) would be more “standard” and easy to the eye.

    Well, having used this kind of software for a while before switching to Blender and Magix, I think that it is the less friendly system. It is used in some simple editors like those provided when you purchase a camcorder, or Windows Movie Maker.

    The main problem with this way of displaying the tracks is that synchronizing video and audio files is more difficult. For example, when you want to add a shotgun sound just when the muzzle flash appears, it is easier to have the sound strip just under the image strip. And if you later want to edit or change a sound file among the dozens of strips, it will be much easier to find.

    Having the ability to place any kind of media exactly where you want is really handy. For users who like the other way, it is easy to decide to use tracks 1 to 5 for video, and 6 and over for audio, for example. There is no need to change the NLE structure for this, because the current one allows both modes at the user's convenience.

  44. @simhar: When I said to focus on the future UI redesign, I rather meant to keep it in mind. Maybe this is a good opportunity to do it, and to focus on real editing functionality instead of things that could change in the next months, rather than freezing aspects that would have to be reworked when the interface is redone.

    ------

    @Roubal:
    I understand your point, but:
    -That division is the standard in almost every single NLE in the market
    -That structure has a purpose: Audio and video tracks are linked and have common track locations: When you drag an AV clip into track V1, its audio goes to A1, and so on.
    -I'm not sure about the synchro problems with that structure: if you have a muzzle flash in V1, you move the play marker to that point and insert the audio clip of the "bang!" there in A1. If you can see the waveform and the playback bar, you can synchronize everything.
    Anyway, I do not hate the current layout (I think it's more flexible), but I think the other structure is more effective in production work because you can keep better track of the assets and their locations.
    -The division between audio and video makes it possible to have specific switches in a track header (for example: a gain slider in audio, an opacity slider in video)

    ------

    @Peter Schlaile:
    The crossfade system gman described works together with trimming, in the sense that it uses the preroll and postroll material prior to the trimming of the clip.
    If the clip has preroll or postroll information, then it's used for the mix; if there is no extra roll, the first frame, frozen, is used (generally with a warning of insufficient media).
    The mixing takes place in the same track/layer. You put both clips in contact (out point of the first by the in point of the second) and you drag the effect over the contact point.
    The complete duration of the transition is over that part, split in halves by default (but you can change each side's duration independently).
    Cinelerra uses almost the same method, you can check it there.

    Another idea for a feature:
    -timeline markers (allow to put flags to identify easily and faster important points in the timeline).

  45. @Roubal: Thinking about it more, those switches I mentioned could easily be IPO channels. Maybe that's not that critical, but I still don't know... I find the other structure a little more solid (IMO, sometimes too much flexibility sacrifices order or, at least, puts it at risk).

  46. As one of the more prolific seq plugin coders, there are several suggestions I would like to make:

    1) give all float aware (version 4) plugins float input buffers. Currently, a ver4 plugin can get either float or char input depending on the nature of the input strips. This means that you have to either duplicate your code for both char and float operation, or convert char input to float for your effect and then convert back to char for output. This not only adds a lot of extra code to each plugin, but if you stack a few plugins it wastes a lot of cycles just converting back and forth for each plugin.

    2) We need more of Blender's buttons available to plugins. Currently only LABEL, NUM, NUMSLI, and TOG buttons are available to plugins. I figured out how to add a TOG3 button and have used it a lot in several plugins, but I was not able to get any of Blender's other buttons to work. I did manage to use a TXT button in framestamp, but recently found out that the way I did it, it would only work if the TXT button was the last button in varstr[] (the text string returned overwrites any cast variables below it - never noticed before because it was the last button I added to framestamp). MENU, ROW, and a working TXT button would come in very handy for plugin coders.

    3) It would be nice to have a more flexible layout for the plugins. Right now, all of the buttons are just put into columns of 6. The only way for a coder to change the layout of a plugin is to add a label button here or there to organize the columns.

    4) I think it would be more consistent (and useful) if an LMB drag in a seq preview window would show the current pixel colors like the render window and the uv/image window do, instead of scrubbing.
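    The conversion overhead described in point 1 is easy to picture: every char-only input forces a round trip like the following before and after the actual effect runs (a sketch with invented names, not the real Blender plugin API):

    ```c
    #include <stddef.h>

    /* Expand an 8-bit RGBA buffer to normalized floats in [0, 1]. */
    void rect_char_to_float(const unsigned char *src, float *dst, size_t pixels)
    {
        for (size_t i = 0; i < pixels * 4; i++)
            dst[i] = (float)src[i] * (1.0f / 255.0f);
    }

    /* Pack floats back to 8-bit, clamping out-of-range values. */
    void rect_float_to_char(const float *src, unsigned char *dst, size_t pixels)
    {
        for (size_t i = 0; i < pixels * 4; i++) {
            float v = src[i] * 255.0f + 0.5f;
            if (v < 0.0f)
                v = 0.0f;
            else if (v > 255.0f)
                v = 255.0f;
            dst[i] = (unsigned char)v;
        }
    }
    ```

    Stack three such plugins and each frame pays six of these loops; if the sequencer handed every ver4 plugin a float buffer, the conversion would happen once at input and once at output.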

  48. Gez is right about what I meant with the crossfades. This works really fast, especially when you combine it with the close gap tool. It allows you to take an unedited camera shot and just cut out all the crap, close the gaps, tweak the cuts and add some crossfades where needed.

    and I agree with Gez, timeline markers would be great too! if you could snap clips to the markers; and if you could assign a key for setting markers, you could "tap" the key to the beat of the music you're editing to and then just snap the clips to the markers.

    the crossfading can also be done in another way. I used it on a hardware based editing suite last year, but I'm not sure what it was called.
    in that system the first 2 tracks would always crossfade all the overlapping material, so you could easily drop all your clips on the first track, and when you needed a crossfade instead of a hard cut you would just drop that clip onto the second track and overlap it, and the overlap would crossfade.

    this works really fast if you have to edit some material down from, say, 1 hour to 20 minutes. And you could do all the more complicated editing on the other track without having to fear that something was overlapping.

    it would also be good to have an in-point/out-point mechanism on the timeline, where you can watch a whole track and define in-points and out-points on the track while watching, and when you're done you will only have the clips you wanted on the timeline, and not the ones you don't want, which would still be on the timeline if you were just cutting the clips.

    @Peter Schlaile
    “linking clips to effects for the overview”
    i understand that it is a good feature to be able to add an effect to more clips,
    but what i meant is that it is not always easy to see which effects are linked to which clips.

    but maybe this would be no issue in the future, when you can assign the effects with the node system :) if that is possible...

  49. Motion tracked data --- from video, and use it not just for placement of objects in Blender but for creating mattes in the compositor (using bezier curves)

    Image editor integration --- I would love to use motion data for video paint in the image editor, X and Y telling where to place the raster paint or cloned data, for example (track wires and remove them). And pipe the layer back to the sequence editor.

    Stabilization of video from motion data -- maybe just sending the data to the compositing node translator.

    I realize that this isn't just the sequencer but I would love to help make this happen.

  50. @ shawn fumo: thanks for the link, i'll have a look at it. (but it looks like there is one important limitation: it is DV only. this is of course not really an option for 3d animation...)

    @ gman: dissolves/fades, effects etc. in final cut pro are little icons you drag and drop either on a clip (for an effect) or over a cut (for a dissolve). when you doubleclick a clip or a dissolve you can access what would be a properties window in blender (in fcp the player/viewer window changes to this when you doubleclick a clip/dissolve in the timeline). here a general screenshot: http://en.wikipedia.org/wiki/Image:Fcp5_screen.jpg (http://en.wikipedia.org/wiki/Final_Cut_Pro)

    @ peter schlaile: thanks for the detailed tips, i'll try and see if i can get what i need this way. about the nle (and blender's) interface in general: from what i have read (and again i am still kind of new to blender) i think there are discussions about how to make blender more accessible for more people. final cut pro (which "borrowed" a lot of its interface from avid) and most other nles i've seen (exceptions might be very expensive high-end ones) all tend to have this drag-and-drop-effects-onto-the-timeline/clip solution. the way blender does it (extra strip/timeline for an effect) is more the way this was done 10 years ago. maybe i have not used it long enough, but to me this seems not very intuitive and tends to clutter the timeline (or might require an extra step to bundle it all in a new strip). i'm sure i can learn how to work with it, but if blender wants to gain a larger userbase then simplifying the interface seems to me very important. i agree that the keyframe editor does work well once you figure it out, but not every creative person is also a technical one (many are just the opposite and get pretty scared when i show them blender). of course in the end ui discussions are also about taste, and this is of course something very personal...

  51. It's me again :-)

    Two things:

    Wouldn't it be better to have the transform properties as standard IPO channels for the strips?
    I don't really feel comfortable with the current management of IPOs with video. I think it would be much better if a strip had its own channels (as 3D objects have):
    -position x
    -position y
    -rotation
    -scale x
    -scale y
    -opacity

    The same could be applied to the transitions: The current transition method is A/B Roll, where you have 2 clips and the transition linking them. The completion value of the transition could be an IPO channel, as could the direction and other variables.

    @Kernon:
    I didn't know it! It's great and it works during playback. I didn't find it before because I was searching in the NLE window, not the timeline. Thanks!
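    The per-strip channels suggested above (position, rotation, scale, opacity) are, at bottom, keyframe curves evaluated once per frame. A toy sketch of such a channel (hypothetical structure and names, not Blender's actual IPO code):

    ```c
    /* One keyframe: a value pinned to a frame number. */
    typedef struct {
        float frame;
        float value;
    } Key;

    /* Evaluate a channel (keys sorted by frame) at an arbitrary frame,
       using linear interpolation and clamping outside the key range. */
    float channel_eval(const Key *keys, int nkeys, float frame)
    {
        if (nkeys == 0)
            return 0.0f;
        if (frame <= keys[0].frame)
            return keys[0].value;
        if (frame >= keys[nkeys - 1].frame)
            return keys[nkeys - 1].value;
        for (int i = 0; i < nkeys - 1; i++) {
            if (frame <= keys[i + 1].frame) {
                float t = (frame - keys[i].frame) /
                          (keys[i + 1].frame - keys[i].frame);
                return keys[i].value + t * (keys[i + 1].value - keys[i].value);
            }
        }
        return keys[nkeys - 1].value;
    }
    ```

    A transition's completion value would just be one more such channel, ramping from 0 to 1 over the transition's length.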

  52. I also don't like the way that the seq editor automatically rescales an input strip to the render size. It would be nice if there was an option under strip properties to import a strip at its original size - it would certainly make the transform effect much more effective

  53. I think that the major feature would be some kind of connection between the NLE and the node editor.

    Add an outliner where you can easily organize your stuff

    some kind of automatic adjustment of the project length to the NLE length, i.e. pressing the Do Sequence button, or even better an "in and out" system like in other NLE packages.

    Snap function for the frame reader line: in AVID when you press the Ctrl button and LMB at the same time, the line that marks the current frame snaps to the nearest cut.

    I can't see interlaced video correctly, I don't know why. It makes strange artifacts

  54. The ability to use realtime opengl effects! Then we could all make our own effects in the 3d view window and these effects would be realtime.
    Some examples that wouldn't need rendering ...
    Bluescreen like in the game engine video plugin (you could even use the code from it!)
    alpha fade ( use multiple materials on a plane and change alpha value)
    video on spinning 3d cube while another video playing in background
    shatter effect
    shrinking circle
    blinds
    rotations
    other effects like in the free windows video editor "Wax", into which you can even import 3d objects.

  55. Kai Schröder on

    Peter: Talking about a standalone version of the Blender Sequencer, I do not want to remove it from Blender itself. I really like the integration as it is. But there is a lack of free video editors. And I just think that by releasing and advertising a standalone application, a lot of new users could be reached without too much effort. And more users often means more progress in the open source world.

    Just my 2 Cents
    Kai

  56. well if you're adding realtime effects based on elements from within the 3d view i would start with a good "stroke path with brush" function. this is really usable in many motion graphics projects (think about alpha masking combined with strokes resulting in "growing" elements etc.)

  57. @Gez:
    "GUI rework": interesting, never happens.
    "Node Compositor / Sequencer": please read my other posts on this topic.
    "asset management": already in the wiki, should work like material
    "Blender uses a less effective approach": nope. It is easy. Asset management should be added in a non-intrusive way. Don't like assets myself, but I agree that they are helpful for certain project types. In other projects, they simply stand in the way between me and my timeline ;-)
    "automatic alpha over (better if there’s a selector for the mixing tecnique)"
    Already in the Wiki.
    "manual pre-rendering and caching of mixes and overlays for better performance when playing"
    Ahhh missed that. Is on my mental TODO... ;-)
    "frame skipping on play (optional, for a realtime approximation achieving the selecter frame rate)"
    ? Already works for me...?
    "better audio scrubbing/synchronization"
    ? Already works for me...?
    "Splitting the timeline in two areas (video and audio) would be more “standard” and easy to the eye."
    Don't like it. As others noted, you are free to arrange your clips yourself that way.
    "Add switches for mute/solo/block tracks."
    Useful, and in the wiki, I think.

    @gman:
    "Crossfade system":
    OK for you, if we do the following: implicitly crossfade if we drag clips into each other. Add a hotkey to accomplish this inside a gap, by moving strip extents accordingly?

    @paprmh:
    "1) Make everything float": Have to benchmark this on current machines. But I think, if the basic effects are recoded with MMX, byte operations are still faster. So I don't want to force people to use float buffers if they don't want to. It is indeed somewhat ugly to always code both versions. Have thought about this myself, since the rest of Blender works completely on floats only... Still open for discussion; what do others think?

    "2) + 3) More buttons, flexible layout for plugins": Agree, but not sure of the implementation. Maybe use some text / XML representation of the Ghost GUI elements? Would make it possible to make the whole interface of Blender customizable some day. What do others think?

    "4) LMB drag in preview window": Think you are right, should be fixed.

    @gman:
    "- Timeline markers"
    are already there. Use the timeline window... (it seems they have even been enhanced in recent CVS)

    "but what i meant is that it is not always easy to see which effects are linked to which clips."
    Maybe it would be a lot easier to make the text inside more descriptive? (Currently we only show "works on track 4 to track 5 using track 7"; we could also just include the strip names here. Should be a lot better...)

    "Again node system"
    I begin to think, that there is a pressure group here on this topic. Maybe we should add a petition page, so that people can sign for node effect strips ;-)

    @Gez:
    "Adding rotation properties etc. to every track."
    Not very clear in design, IMHO. Just check out current CVS. The transform effect should work for most people. If you are starting to do more creative 3D effect stuff, you should probably use the 3D scene editor for this...
    (Before seeing the transform-effect in the patch tracker, I had the same idea, but right now, I'm very satisfied with it... ;-)
    That said, I will indeed most probably add some basic support (posx, posy, scale x/y) for movie strip import. Otherwise it is impossible to import movie strips in correct aspect ratio or do letterboxing correctly.

    @paprmh:
    "automatic rescaling on input":
    It is a wiki entry. Have discussed it with intrr there. If nobody complains, I will most likely add it the way I described it there. It will even add basic transform properties to movie strips, which some people wanted here anyway... ;-)

    @dazzler:
    "a lot of realtime opengl effects":
    Yiieks. I wanted Blender to be a professional package, sorry. If you really need 3D transitions (which I personally doubt), you can always use the 3D scene editor for this.
    Accelerating the available effects using OpenGL is on Brecht's TODO. (He started on this with the node editor, but sequencer effects are basically the same.)

    @gman:
    "well if you're adding realtime effects based on elements from within the 3d view i would start with a good "stroke path with brush" function. this is really usable in many motion graphics projects (think about alpha masking combined with strokes resulting in "growing" elements etc.)"
    Since this is rather obscure: Why not just use the 3D-editor for creating the animation? This is BLENDER!

    @CKPW:
    "Thanks for the info. But something more intuitive. like a simple spline control in the sequencer. Would speed it up 10 fold."
    Don't understand. Could you explain?

    @AkhIL:
    "add jack support". Interesting. Intrr thought more on integrating "Midi Sync" (to accomplish basically the same). What do others think?

  58. It would be nice if there was tighter integration with the Compositor, because the compositor is in a way a video editor in its own right, but it's pretty hard to use now to edit video.

  59. High end Avid systems have a feature that allows them to store video in a database-like structure. I have done several 15/20 minute videos in the past. The most recent one had over 10 hours of footage and ~400 clips. Searching for those files is a pain. It would be great if it could be set up to search for metadata in those clips. Sqlite+Verse?

    Verse support would be nice. Multiple people working on one project.

  60. @Peter Schlaile
    "
    "well if you're adding realtime effects based on elements from within the 3d view i would start with a good "stroke path with brush" function. this is really usable in many motion graphics projects (think about alpha masking combined with strokes resulting in "growing" elements etc.)"
    Since this is rather obscure: Why not just use the 3D-editor for creating the animation? This is BLENDER!
    "

    these "motion graphics" functions I describe are very hard to make in the 3D window. you can do some stroking with the particles, but this is limited and not really intuitive. I don't think the NLE is the right place to add such functionality, though.

    Maybe a motion graphics screen layout with a bunch of specific motion graphics functions, like stroking vector paths, easily transforming vector paths with keyframes without having to use the shape keys (slows me down a lot!), and importing bitmap graphics (or even movie clips) with an alpha channel right onto the canvas without having to map pictures to a plane. of course with good previewing in the 3d space (also some previewing of the applied effects to the elements).

    that would be fun :)

  61. Something else I thought of...

    In the compositor, give all image nodes a time input. This will allow you to remap the time of the image node (and any other node, for that matter) to speed things up, slow things down, stop them or reverse them.

    One more thing is adding an IPO node. This will give you access to the IPO editor. Each node would have access to one curve in the IPO editor.
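    The "time input" idea amounts to remapping the evaluation frame before a node fetches its image. A small sketch (a hypothetical helper, not an existing node API) covering speed-up, slow-down, freeze and reverse:

    ```c
    #include <math.h>

    /* Map a timeline frame to a source frame. speed > 1 speeds playback
       up, 0 < speed < 1 slows it down, speed < 0 plays the source
       backwards from its last frame; out-of-range results freeze on the
       first or last source frame. */
    int remap_frame(int frame, int strip_start, float speed, int src_len)
    {
        int src = (int)floorf((float)(frame - strip_start) * speed + 0.5f);
        if (speed < 0.0f)
            src += src_len - 1;          /* count down from the end */
        if (src < 0)
            src = 0;                     /* freeze on the first frame */
        if (src >= src_len)
            src = src_len - 1;           /* freeze on the last frame */
        return src;
    }
    ```

    Setting speed to 0 freezes the image, which covers the "stop them" case in the same function.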

  62. I know my previous posting doesn't seem to have anything to do with the current purpose of this page, but on another website people mentioned that when you hit Alt+A nothing happens in the compositor and there is no IPO access; this would be one way of fixing that problem.

  63. One more thing and that is it for tonight...

    If you have any questions about how something will work if, hopefully when, the sequencer and the compositor are combined, please ask me; I have given this a lot of thought. This is not to say I know everything, but I can and want to help to get these combined. Since the compositor is completely integrated into Blender, I think this would be the best for the future of Blender and what it can be capable of.

    “stroke path with brush” I have thought of this before...

    What if you combined some of the current capabilities in the 3D view with the ability to track the mouse? You would start by clicking on the paint command or button; when you get into the 3D window you would click and hold the LMB or RMB and draw. As you draw, Blender records mouse positions. When you let up on the mouse button, Blender would produce a path, create a plane, move the plane to the beginning of the path, add an array modifier to the plane and have the duplicates follow the path in position and orientation. Then the command would create a new material. It would then create a texture with that look or brush shape, then apply it to the plane and all of the duplicates. This is similar to what Photoshop does; the only difference is that this is in 3D and completely editable after you create it - actually it is something more like After Effects.

    So what do you guys think of these ideas - good, bad or ugly???

    OK, this is it for tonight, NO MORE ADDING FOR ME, I will get back to you later.
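    The recorded-mouse-path idea above also pairs naturally with the "growing stroke" effect requested earlier: once the positions are stored as a polyline, revealing the stroke over time is just an arc-length cutoff. A sketch (invented names, purely illustrative):

    ```c
    #include <math.h>

    typedef struct {
        float x, y;
    } Pt;

    /* How many leading points of the recorded stroke are visible once a
       fraction t (0..1) of its total arc length has been "drawn". */
    int stroke_visible_points(const Pt *pts, int n, float t)
    {
        if (n <= 0)
            return 0;

        float total = 0.0f;
        for (int i = 1; i < n; i++)
            total += hypotf(pts[i].x - pts[i - 1].x, pts[i].y - pts[i - 1].y);

        float target = t * total;
        float acc = 0.0f;
        int visible = 1;             /* the first point is always shown */
        for (int i = 1; i < n; i++) {
            acc += hypotf(pts[i].x - pts[i - 1].x, pts[i].y - pts[i - 1].y);
            if (acc > target)
                break;
            visible = i + 1;
        }
        return visible;
    }
    ```

    Animating t from 0 to 1 with a keyframe curve would make the stroke grow along its own path.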

  64. @Peter:

    maybe the "preview pictures in the timeline" shouldn't be for every frame, but one (maybe the first) picture per movie strip, to identify it easily...

  65. this is a good one: It would be great if you could load pictures or movies directly from the image browser to the timeline; this way you can manage all your files directly from the file (image) manager without doing anything else, including a preview picture

  66. I don't think adding effects, a database for clips, masking and a couple of the other things asked for are good ideas for the sequence editor.
    The sequence editor (as well as the compositor) is intended to add editing functionality to Blender.
    Blender isn't video editing software. It's a 3D package.
    Editing features are useful for putting the scenes together, but effects, plugins and stuff like that aren't essential, IMO. That should be achieved using the compositor (and that's why I think the sequence editor should forward its tracks to the compositor)
    Some people seem to be asking for an Adobe Premiere with Hollywood FX replacement, for editing wedding videos, and that's not the point :-p

  67. Gez,

    [QUOTE]Blender isn't video editing software. It's a 3D package.[/QUOTE]

    It doesn't have to be one or the other :) Many 3D packages are moving towards integrating all aspects - see Houdini for instance.

    [QUOTE]Some people seem to be asking for an Adobe Premiere with Hollywood FX replacement, for editing wedding videos, and that's not the point :-p[/QUOTE]

    I'd rather have a Shake + FCP replacement. Most people aren't looking for a low end NLE but for a serious NLE and compositor suitable for professional work. Just because you have no need or interest for such tools doesn't mean that Blender shouldn't improve to meet those needs as well.

    LetterRip

  68. Tom: I'm a graphic designer, I do motion graphics frequently and of course I'd really like to have tools for that in Blender and get rid of the other expensive and gigantic tools.
    You say many packages are moving toward integrating, but that carries huge costs:
    Less stability, heavier use of system resources, interface problems, etc.
    I think the do-it-all approach is not the best. I've never seen a program of that kind working better than standalone. Everything gets more complicated and cluttered that way.
    I'm not sure if i'd like to see blender become a multipurpose app.
    I like it as a 3D package.
    Of course, 3D works need editing and compositing features, and it's great to have them available.
    But if I want to edit a video that has nothing to do with 3D, I go for a video editor.
    If I want to grab a PSD with 50 2D layers, I want a program with a workflow designed for doing that fast and easy.
    I'm not saying that the same thing can't be achieved with Blender and its excellent workspace management. The interface is flexible enough to stand that approach.
    I'm concerned about the general focus of the program.
    Even though there are lots of programs integrating different features, they don't replace the standalones. You said it: you use Shake AND FCP. Shake is a compositor, FCP is a video editor.
    The same goes for After Effects and Premiere Pro. You can edit a video with After Effects, but Premiere Pro has a much more optimized workflow for editing.
    Premiere Pro has keying and transform filters, but if I want to composite a couple of layers I'd rather use After Effects.

    Anyway, if Blender takes that path (to become a complete compositor and editor for specific uses) its interface should go through a very careful redesign so it can fit well with all the possible workflows.

  69. I would love to see some of Blender's weaker areas strengthened. That's what this is about.
    I have Premiere Pro with a 1000 real time editing card which I hope not to use... :) I want to use Blender; it's all there. I hate pulling a project from one program to another, it gets so messy.
    I think that Ton is working on the image editor, or is it the browser?

    Are these compositing tasks or video???
    I would love motion tracking for video to intelligently help with all the things we want to do with Blender.
    So what I am asking for is a path in for motion tracking; the sequencer handles the video, so that is where I think it should be. Then it could be exported for auto rotoscoping in the image editor. Or stabilization.

    Also a way for things that you could do in the image editor (painting, frame by frame) to be put into a sequence automagically. Is this a compositing task? It's a little unclear. The image editor is lacking in the video dept.

  70. I almost forgot! I would really like a better integration of the external effect plugins in the menu structure (like categories).

    @Gez
    I don't think more features make a program less stable or harder to use. I used Blender for modeling and rendering for about a year when I started 3D and NEVER got confused by the animation options that were available. Why should a user get confused by something he is not using?

    I don't know about the stability and the system resources, I'm not a programmer. But I think that Blender does not use any CPU/GPU for functions you are not using; I think it uses a tiny bit of RAM (too small to really be noticed) and some hard disk space (definitely too small to be noticed). I don't think it will compromise the "Blender taking over the 3D world" plans at all.

    @Gez again.

    "
    Blender isn't video editing software. It's a 3D package.
    Editing features are useful for putting the scenes together, but effects, plugins and stuff like that aren't essential, IMO. That should be achieved using the compositor (and that's why I think the sequence editor should forward its tracks to the compositor)
    Some people seem to be asking for an Adobe Premiere with Hollywood FX replacement, for editing wedding videos, and that's not the point :-p
    "

    Why not? Why would you focus Blender on only 3D? Why not make it "the open source graphics solution"? You might not use it, but I will, and with me a lot of people who have to dual boot their machines with Windows to do some decent video editing. I think there are a lot of users out there who would like a real multi-platform video editing solution. And don't forget that the Blender way of doing things (fast workflow, shortcuts) appeals to a lot of pro users, so this might be a boost to Blender's maturity in the opinion of the pro market.

    And I don't know if someone will be able to add all these features, but if someone does, why not?

  71. gman: I do video editing, motion graphics and compositing. Really.
    I'm a graphic designer. I work with print and video. Of course I'd love to have applications for that on my Linux box, and much better if they're free software (as in speech and as in beer).
    Don't get me wrong: I do use those features. I'm not saying "I don't use them, so they shouldn't be there". What I'm trying to say is something completely different.
    You say: "I never got confused by the animation options"... It's animation! It's an essential part of a 3D package!!!
    Editing and compositing are complementary tasks. If a 3D package doesn't have them, you can always render the 3D material with an alpha channel and take it to other applications.
    In fact, that is the most common practice in the professional world (when compositing materials from different sources).

    If we want Blender to be a video editor and a compositor... why not make it also a vector illustrator, a raster editor, a print manager, a font manager, a spreadsheet... :-)
    It's pointless. If you lose focus, you lose specificity and quality. Blender was born as a 3D package. It's OK to add video and compositing features to extend its 3D capabilities, but changing its focus and turning it into a completely different application is, IMO, a big mistake.

  72. Peter Schlaile on

    @Gez:
    "Why not make it also a vector illustrator, a raster editor, a print manager, a font manager, a spreadsheet... :-)"
    You know DTPBlender ;-) ?

    Seriously: I think compositor and video editor fit very well. (Especially if I see NLEs that try hard to integrate 3D-effects and compositing... They will always fail, since they only include hard coded "wedding movie" effects, that can't be used for anything serious. And that is the point, where Blender as a complete editing / compositing / 3D suite makes _really_ sense...)

    Also: I find myself using Blender for 2D-stuff more and more often. The 2D-functionality isn't that bad. Not as good as Adobe Illustrator or Corel Draw, but basic work can be done well. (and that is enough for making good clean titles for video - even animated, which Illustrator or Draw certainly can't do... !)

    "Editing and compositing are complementary tasks. If a 3D package haven’t them you always can render the 3D material with alpha channel and take it to other applications."

    That is not completely true when you take a look at Ton's vector blur node in the compositor. This can't be done afterwards... (Name me an image format that also contains velocity vector information...) The compositor is entirely part of the render pipeline and can't be taken out.

    I think we should simply stop the discussion: "Should we have compositing and sequencing in Blender or not?".
    Both features are in Blender and, as far as I can see, are useful and won't be removed.
    Since they are in, we try to make them as good as possible and make them fit as well as possible into Blender.

    (No bashing intended. I appreciated your comments very much!)

  73. @Peter Schlaile:

    "Don’t want to bash here..."
    You're not bashing. You know a lot more of the complexities involved here, so it helps when you explain why some ideas won't work.

    "They simply seem to assume that ffmpeg can seek by itself, which is not true on mpeg files, for example. But maybe some gstreamer expert can enlighten me."

    I don't think they use ffmpeg for seeking or sync. I believe the Gnonlin plugin handles that type of thing. (Gnonlin is a gstreamer interface that provides functionality needed by video/audio editors)

    "maybe I misunderstood the documentation, but I was under the impression, that you have to hand over the event loop to gstreamer to make it work."

    I've been told that's not the case: "It runs its own threads. The only mainloop integration is via the GstBus, to receive messages from the pipeline, but that doesn't have to use a mainloop either"

  74. I agree with Peter... Not only is compositing in Blender; check out the new stuff in CVS. Blender has already become a great compositor. There are a ton of new compositing features in the CVS, as well as a great patch to do garbage (bezier) mattes.

    To me it is a process of getting what's there working together and removing the road blocks, not making major structural changes. Let the pipes share info with each other. I agree that too much change can kill a package. For me, Adobe did this: Premiere fell behind by trying to integrate too much... It is a good lesson. But I don't feel Blender is in danger of this.
    Rant warning...
    Open source is great at developing in 12 directions at once, as long as those who are managing the releases keep their goals clear, which they seem to.
    Open source is great in that little facets that some people would deem unimportant get developed and then become invaluable.

  75. One potential issue with gstreamer is the quality and usefulness of the OS X and Windows ports.

    Another potential issue is how does it compare regarding binary size for Blender (it depends on glib, so glib and the gstreamer binary sizes need to be considered).

    LetterRip

  76. "One potential issue with gstreamer is the quality and usefulness of the OS X and Windows ports."

    That would have to be looked into. I only use linux, so I have no idea of how well the ports work and what issues they may present.

    "Another potential issue is how does it compare regarding binary size for Blender (it depends on glib, so glib and the gstreamer binary sizes need to be considered)."

    I think the glib dependency is only if you are building the gst-editor. It's a separate module and not part of the core. I don't see why glib would be needed elsewhere. If you only packaged up the core, and useful plugin modules, I don't think it would add much to the size.

  77. @Peter: Maybe my poor English doesn't let me explain myself correctly.
    I've never said that compositing and sequencing should be removed from Blender (check my first comments).
    I'm trying to say that the features added for these areas should be useful for 3D work.
    I'm arguing against adding features that are not related to 3D work; that's why I mentioned wedding videos.
    If you're going to edit a 2 hrs. wedding video and you want to put in a flying 3D golden ring, do you run Blender, make the ring and then edit 4 hrs. of raw material in the sequence editor using the K key?
    That's nuts. Why would you use a program designed for other things when you have specific programs for that?
    If I have to do something like that, I edit the video with Cinelerra, render the ring as a TGA sequence (using vector blur, it produces a perfect alpha channel right out of the compositor) and put video and CG together in Cinelerra.
    A very different thing is editing a short scene with lots of vfx using greenscreen and 3D backgrounds, mixing 3D with live footage, or layering animated material. In that case, of course I'd use Blender.

    Don't get me wrong: I'm only asking to add features that are relevant to 3D work.

    Finally, you mentioned DTP Blender. I'm a professional designer. DTP Blender is OK for a student flyer, but seriously, it lacks the main features needed in production work. It's a nice experiment, anyway.
    You say you're doing more and more 2D work in Blender... I know it can be done... but isn't it easier and faster to do it with Inkscape and then import into Blender for animation? Even doing that, you have to assign materials to each path, one by one.
    If you have to do some 2D illustration work, Blender isn't the best choice. That's what I'm talking about!

  78. Oh, and Peter... I don't feel that you and the others who think differently than me are bashing me.
    It's not an obligation to think as I do. ;-)

    And my last comment about this: read your own answer to the "DV Grab" feature request. That would be my answer too. That's my point with the other requests.

  79. @Gez
    I understand that you don't want Blender to be an "everything in one package"; that seems logical to me. You don't want to be bothered with functionality that is not interesting to you.

    For me (and I guess many others too), 3D is very integrated with motion graphics and video editing.
    I am not saying that these are things every 3D program should be able to do, but in my opinion the best 3D software would be a package which allows its users to do everything they need to create the best end results. And Blender is in many ways the best solution, and one of the main reasons is that it has a lot of the things I need in my motion graphics pipeline.
    So why keep these parts of the program basic? They are very valuable for a lot of users, and if there are people interested in developing these functions, why not?

    I get your point that there is no use in including spreadsheets, but I just don't get what is wrong with pro-level video editing functionality?

    (I hope I am not boring you :) but I really think video editing is an essential part of many 3D pipelines, and good DV video editing (on material similar to wedding video material) is a great feature in a 3D program, so you can easily do special effects and motion graphics in the same environment where you do the editing.)

  80. Adding pro editing functionality to Blender is not wrong. Turning Blender into a video editor is a very different thing.
    Editing has a different workflow than modelling, animation and compositing. You need different tools and different media management.
    When you edit video, you have raw material which you have to trim and organize in the timeline. Non-linear editing is that: having everything available and putting it together.
    Blender's current workflow doesn't allow that, because it has no asset management.
    Having assets in Blender would imply a very radical restructuring of the way Blender manages all the media files in the different modules. It would have an impact on the whole package, not just the sequence editor. It would be a completely new direction, and I guess it would require a complete and deep analysis. It's a big change!

    Putting together rendered clips is not video editing. It's "sequence editing", it's montage. The Blender sequence editor is perfect for that, and with some improvements (I assume that's Peter's idea right now) it will do the work perfectly.
    But video editing (i.e. a wedding video, a filmed movie, a TV program) is something completely different.
    If you want to understand my point, grab an hour of DV footage, capture it and try to edit it with Blender. Try to do the same with Cinelerra, Premiere Pro, FCP or Avid and watch the differences.
    Replacing one of those specific programs would require transforming Blender completely. It won't be done just by adding effects and a few features.

  81. Peter Schlaile on

    @Gez: Hmm, to be honest, I haven't ever used the Sequencer for serious 3D-work. In fact, I needed a stable NLE to edit our company's celebration video, which was 3 camera live footage 2 hours long... ;-)

    Since I haven't managed to get anything useful out of the other OSS solutions (Cinelerra, Kino, etc.), I tried the Blender Sequencer and was surprised that it reacted the most like I expected it to.

    It is rather my relatively conservative view on what an NLE should and shouldn't do that made my code useful for others.

    The nice thing about Blender is, that it "thinks" in an orthogonal way. Other programs try to build single purpose plugins (most probably to make advertisements, that they included another zillion effects in the next version, that I have to buy), instead of making a basic tool set, that can be used in a general way.

    Why should I try to include _any_ 3D-transition effect into the sequencer? The single purpose effects are basically useless, and if I need something special I'm doomed anyways with a commercial NLE. In Blender I simply add a scene, create my effect and throw it onto the timeline.

    Why should I try to add _any_ compositing functionality except the most basic one? There is a compositor, only one screen away included, ready to help.

    In that way, I think that by simply focusing on making a very straight, lean and mean NLE that integrates well with the rest of Blender, we could satisfy both worlds: the 3D artists _and_ the wedding video guy. (Assuming that the latter had time to learn the interface...)

    In fact, I wanted to add something soon that should make our live video freaks especially happy: the (optional) ability to seek based on timecode instead of frame number. If you have several cams and want to synchronize later, it is a real pain if someone decided to stop his camcorder in the middle of the show several times. The funny thing about that is that it also makes mpeg-seeking more stable in certain cases (there are mpeg files out there that have variable frame rates... yiieks!).

    I still think that we can make both worlds happy if we don't try to add features "just for the sake of it" and stick to a minimal feature set that works very robustly and gracefully.

    To make it short: we both have different reasons for our opinions but most probably will arrive at a very similar destination... ;-)

    P.S.: I simply can't believe, that you would try to edit a wedding video with Cinelerra... ;-) I will go nuts, if I have to use it again...

  82. Peter Schlaile on

    @Gez: Regarding asset management: you would be surprised that it could make the code of Blender cleaner... at the current stage, sequencer strips are something alien in the Blender data management. I still have the (maybe crazy) idea that adding asset management the Blender way (in a somewhat similar way to how material is handled) could make a lot of people _very_ happy.

  83. Peter: I'm glad to know that.
    At last we think the same. What you wrote in the previous comment is exactly what I was trying to say.
    Oh, and about the asset management... I will be one of those happy guys.

    P.S.: I use Cinelerra and it's great. Like Blender, quite weird at first, but when you get used to its weirdness, it's very good. If you try Cinelerra again, remember the trimming process (trim small clips from the raw material in the viewer window, and use splice or overwrite to put them into the timeline, instead of putting in the whole video and cutting it directly on the timeline).

  84. First of all: Thank you for your great work on the sequence editor!
    I've been playing with the speed control effect from current CVS and can't quite achieve what I need, and I'm starting to wonder if it is possible at all: I have an image sequence (25 images) and would like to automagically fit its speed to the length of the strip in the sequencer, so that I can simply grab the end of the strip, adjust its length (say, 378 frames) and the whole image sequence will be played once during that period. I'm constantly changing the strip's length because it is used for experimenting with timing for an animatic, so I'd rather not have to touch the IPO every time I change the strip length. Does that explanation make sense?
    I haven't been able to do that so far - I'm quite sure I'm just missing something, so a little hint would be appreciated very much :-)

  85. Peter Schlaile on

    @mppic: sorry, the easiest way currently is indeed using the IPO in frame locked mode, using two control points (one at start, one at end) and adjusting the second one to your strip length. The reason for this: the speed controller doesn't really know in which way its input changes length (it can't know the original length of the input strip!). If this task is very common, one could add a hack to accomplish it, though it would be a hack, really!
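    For illustration only (this is a toy sketch, not Blender API code), the linear mapping that the two-point frame-locked IPO encodes could be written as:

```python
def retime(frame, strip_len, src_len):
    """Map a frame index within the strip (0..strip_len-1) to a
    source frame index (0..src_len-1), linearly stretching the
    source to fill the strip -- what the two control points encode."""
    if strip_len <= 1:
        return 0
    return min(src_len - 1, int(frame * src_len / strip_len))
```

    With mppic's numbers (25 images stretched over 378 frames), frame 0 maps to image 0 and the last strip frame maps to image 24, so the sequence plays exactly once.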

    What do others think? Is this very common?

    retiming a strip is quite common. However, it is generally done using pixel interpolation to generate frames at the proper locations.

    http://sfx.realviz.com/products/rthd/index.php

    It might be interesting to do for other purposes in Blender as well: render half your frames and then interpolate the other half (which can easily save you 90%-plus of the render time for the second half of the frames, with similar quality results).
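    As a toy illustration of the render-half-then-interpolate idea (plain cross-blending of neighbouring frames, no motion estimation, so the quality is below what motion-compensated retiming achieves; all function names here are made up):

```python
def blend_frames(a, b, t):
    """Linear cross-blend of two frames (flat lists of pixel values);
    t=0 returns a, t=1 returns b."""
    return [(1.0 - t) * pa + t * pb for pa, pb in zip(a, b)]

def interpolate_missing(frames):
    """Given only every other frame rendered, synthesize the frames
    in between by blending each pair of neighbours 50/50."""
    out = []
    for i in range(len(frames) - 1):
        out.append(frames[i])
        out.append(blend_frames(frames[i], frames[i + 1], 0.5))
    out.append(frames[-1])
    return out
```

    Real retiming tools replace the 50/50 blend with motion-estimated warping, which is exactly the step Peter mentions below as too slow without baking.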

    LetterRip

  87. Peter Schlaile on

    @mppic: thought about it again. You are right. Is in CVS now. (Works only for enlarging right now and only for the right handle, not the left one. Will add that later.)

  88. Peter Schlaile on

    @tom: The speed controller does already interpolate if you click on frame blending in the N-key dialog. Looking at realviz.com, I think the only thing I'm not doing right now is motion estimation, which certainly could improve the quality, but without baking it would be too slow for the sequencer right now. This is a nice example of an effect that is mainly time oriented and therefore does not really belong in the compositor but rather in the sequence editor, without being realtime by definition, since we have to do motion estimation if we take the problem seriously...

  89. Peter Schlaile on

    @Reuben: "I don’t think they use ffmpeg for seeking or sync. I believe the Gnonlin plugin handles that type of thing. (Gnonlin is a gstreamer interface to provided functionality needed by video/audio editors)"
    I read the Gnonlin sources. The seeking is handed over to the input plugin by passing a seek message up through the tree. The input plugin then performs the seek operation. This is rather naively implemented and will certainly not work with mpeg files. Looking at my latest version of the seek code, maybe it is possible to work around this. Have to think about it...

    "“maybe I misunderstood the documentation, but I was under the impression, that you have to hand over the event loop to gstreamer to make it work.”
    I’ve been told that’s not the case: “It runs it’s own threads. The only mainloop integration is via the GstBus, to receive messages from the pipeline, but that doesn’t have to use a mainloop either” "

    That sounds interesting. Is there example code out there, that shows that?

    @tom: The binary size of gstreamer is essentially the interface library, which is very small (around 30 kb or so). The problem I had with it was more that I was not sure it could be used as a serious ffmpeg replacement, because of seeking issues and a not-up-to-date ffmpeg version.
    And it should be multiplatform, according to the advertisement. ;-) One has to test, that's for sure...

  90. Peter Schlaile on

    @mangojambo: "I want, I mean, I need chroma key !!!"
    You noticed that there is a compositor included? The latest version even has an easy-click chroma keyer, but one can also build one with the basic nodes that will even remove arbitrary backgrounds.

  91. Wow, that's amazing! Now the speed control behaves exactly the way I imagined - thanks a bunch, you just made my day!
    Another tiny idea just came to my mind: would it be hard to integrate frame blending in the transform effect? This would help me to keep a consistent look while I'm zooming/panning/rotating in storyboard stills in between speed controlled image sequences. It's not a problem now and certainly not a request, it would just be nice and consistent ;-)

  92. Peter Schlaile on

    @mppic: Frame blending is already part of the Speed Controller N-key window. Or why should there be a need for the transform tool to do blending? A little bit confused...

  93. I just want to mention another reason why I think asset management is important even for purely 3D work. Creating just what you need and assembling it in the timeline may be OK for smaller projects and meticulously planned bigger ones, but there can still be problems. For instance, it seems like a lot of time was lost in Elephant's Dream because things were planned a certain way in the storyboard but had to be changed at the end of the process. I think the individual scenes were roughed out and then completed, but when putting the scenes together, pacing issues came out, etc.

    So, perhaps the workflow could be made more similar to filming a live-action movie. When you're still in the animatic stage with very basic models and animation, you could create longer takes and even multiple takes of the same action exploring different ideas. At this point, you can depend on the asset management and the strength of the editing tools to create a relatively solid edit of the whole movie. Then the pro voice acting can be done. Now you can go back and flesh out the portions that were actually used.

    So, in this case the power of the editing suite would be more important, because it would be tightly integrated into the process of creation instead of just assembling finished pieces of animation at the very end. In this kind of situation, interesting integration options become possible. For example, if you select a clip in the assets or timeline and use a special option, it could open up the scene in Blender, marking the ranges of frames that are actually being used in the final edit.

    So, I think a powerful editing section (including assets) is actually more important for a project like Elephant's Dream than for something like a live-action film that happens to have one 3D scene in it. In the latter case, it may very well be better to use a dedicated video app which is more likely to have video-related features like de-interlacing, capturing directly from a DV source, etc. (though the lack of a good open source editing program on Windows is still pretty glaring). But for the former case, it could potentially save a lot of time to have a version of the final edit throughout the whole project, to make sure that a final render only happens for what is absolutely needed.

  94. @mangojambo: There is a very nice sequencer plugin for chromakey that is very advanced -- http://blenderartists.org/forum/showthread.php?t=76254

    @Peter Schlaile:

    "I read the Gnonlin sources. The seeking is handed over to the input plugin by passing a seek message up through the tree. The input plugin then performs the seek operation. This is rather naively implemented and will certainly not work with mpeg files. Looking at my latest version of the seek code, maybe it is possible to work around this."

    OK, that does seem a little silly, and I don't even program this type of stuff. I'm not familiar with this type of thing, but I would think it would be better to have something like a ring buffer set up that makes calls to the input plugin to get frames to fill the buffer with, and then do the seeking on the frames within the buffer. (I'm sure that's not how it's actually done, but even I know that seeking mpeg files is a pain)
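    For illustration, the buffering idea could look roughly like this in Python (a toy sketch, not how Gnonlin or ffmpeg actually work; `decode_next` is a hypothetical stand-in for a sequential decoder callback):

```python
from collections import deque

class FrameRingBuffer:
    """Fixed-size buffer of decoded frames: the decoder fills it
    sequentially, and nearby seeks are served from the buffer
    instead of re-seeking the (hard-to-seek) MPEG stream.
    decode_next() must return (frame_number, frame_data) pairs
    in stream order."""

    def __init__(self, decode_next, capacity=64):
        self.decode_next = decode_next
        self.buf = deque(maxlen=capacity)  # oldest frames fall off

    def get(self, frame_no):
        # Serve from the buffer if the frame is still cached.
        for n, data in self.buf:
            if n == frame_no:
                return data
        # Otherwise decode forward until we reach the target frame.
        while True:
            n, data = self.decode_next()
            self.buf.append((n, data))
            if n == frame_no:
                return data
```

    Small backward seeks hit the cache; anything further back than the buffer capacity would still need a real stream seek, which is where the keyframe problem comes back in.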

    I'm sure they're always open to suggestions and tips on how to better implement the seeking, especially since Gnonlin is targeted towards audio video editing programs.

    "That sounds interesting. Is there example code out there, that shows that?"

    Probably, but I would imagine the folks on #gstreamer at freenode would know a lot better than I do where to point you for that type of thing.

  95. @Peter
    you are right, of course, there really isn't anything to blend, as the transform tool already calculates all the needed inbetween frames. Which is exactly my 'problem'.

    Maybe I wasn't too clear about what I'd like to achieve. I have two strips that I want to have a similar look: first I stretch an image sequence via speed control with frame blending turned on; everything is fine so far. The second strip is a single image which I zoom into via transform, which (correctly) gives a smooth animation. To make this strip look like the previous one (with blending), I'd have to 'drop' some of the frames transform calculated and replace them with blends, which isn't possible right now (correct me if I'm wrong).

    I realize now that integrating frame blending in the transform tool is more complicated than I originally thought, because you'd also need some sort of 'strobing' or 'step size' that tells transform *not* to create all the inbetween frames.

    A more general approach would be to turn strobe and frame blending into independent effect strips. You could then take the transform output, drop frames as needed with strobe, and after that blend the remaining frames.

    Quite flexible, but even more complicated to implement - and maybe there isn't any other 'need' or use for this except my case, so I guess we might as well forget about my ideas ;-)
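    To picture the proposed pair of effect strips, here is a toy sketch (hypothetical helpers, not Blender code): `strobe` holds every step-th frame so intermediate frames are dropped, and `frame_blend` then averages neighbours to soften the result:

```python
def strobe(frames, step):
    """Hypothetical strobe strip: keep every `step`-th frame and
    repeat it, preserving the clip's overall length."""
    return [frames[(i // step) * step] for i in range(len(frames))]

def frame_blend(frames):
    """Hypothetical frame-blending strip: average each frame (a flat
    list of pixel values) with its successor."""
    return [[(a + b) / 2 for a, b in zip(f, g)]
            for f, g in zip(frames, frames[1:] + frames[-1:])]
```

    Chaining them (`frame_blend(strobe(transform_output, 2))`) would give the transform strip the same stepped-and-blended look as the speed-controlled strip.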

  96. Peter Schlaile on

    @Reuben: I don't think that the seeking _concept_ of Gnonlin is silly; it was the implementation of the ffmpeg input plugin that was not that nice. The first versions of my seek code in Blender were a lot more complicated. The current one does simple preseeking. It could be adaptable to gstreamer; I have to look into their timecode management.
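    For what it's worth, the preseeking idea can be sketched like this (a toy model, not the actual Blender or gstreamer code; `keyframes` and `decode_from` are hypothetical stand-ins): jump to the last seekable keyframe at or before the target, then decode forward to the exact frame.

```python
def preseek(target, keyframes, decode_from):
    """Return the frame at `target` by seeking to the nearest
    earlier keyframe and decoding forward from there.
    `keyframes` is a sorted list of seekable frame numbers;
    `decode_from(kf)` returns the frames from kf onward."""
    kf = max((k for k in keyframes if k <= target), default=0)
    frames = decode_from(kf)
    return frames[target - kf]
```

    This is why naive "seek straight to frame N" logic fails on MPEG: N is usually not a keyframe, so some forward decoding from the previous keyframe is unavoidable.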

    @mppic: still a little bit confused. The speed controller can be configured using IPOs to do only blends or strobes or both... ?

  97. It would be nice to see a GL render option in the sequencer preview to allow for scrubbing through your video when viewing a scene strip. I was working on a project with yafray rendering a dense mesh, and the preview was dragging badly. If it could preview in GL, then we might be able to preview in realtime while editing in the sequencer with dense meshes or other resource intensive scenes.

  98. vampirezero10 on

    I've been using Blender for a year now. I am still not that good at it, but I know how to use it. I go to school for multimedia, and I have found that some of Blender's video features compare to Final Cut Pro, so why not make them alike? What I am saying is: use Final Cut Pro as a template to fix Blender. If you use it, you will see what I am talking about. Final Cut is only on Mac; we have lots of video programs for PC, so we should make the video editor like Final Cut.

  99. Has anyone ever thought about being able to use the Korg nanoKONTROL or another type of MIDI controller for play/pause and scrubbing functionality, as well as editing IPO curves in real time?
