
How you can help us to make Blender better


Thomas Dinges has published a good overview of what YOU can do to help improve the quality of Blender releases.

Thomas writes:

We just released Blender 2.67b, the second bugfix release for the Blender 2.67 version. In a 2 month release cycle this is not really nice, and we all would like a stable Blender release. I don’t blame anyone, errors happen and they can happen everywhere. But I think that together, we can do better. So the question is how we can avoid those “a” and “b” update releases.

We always release a release candidate about one week before the final release, which gives everyone time to test the new version.

Here are 3 questions you should ask yourselves:

  1. Do you download the release candidate (RC)?
  2. Do you test the RC with your everyday blend files, to see if the things you usually do still work?
  3. If you find a bug, do you report it to the bug tracker, so we know about it?

If you can answer yes to all three of those questions, that’s great and I want to thank you for that. If the answer is no, though, I hope that you will reconsider! Please keep reading.

Link

About the Author

Bart Veldhuizen

I have a LONG history with Blender - I wrote some of the earliest Blender tutorials, worked for Not a Number and helped run the crowdfunding campaign that open sourced Blender (the first one on the internet!). I founded BlenderNation in 2006 and have been editing it every single day since then ;-) I also run the Blender Artists forum and I'm Head of Community at Sketchfab.

50 Comments

  1. perhaps you should add a fourth question:
    4) do you feel the RC-to-release window is too short, and that extending it would help?

    you know, one (typical) week to download, test and report bugs (in particular because you may have to describe a complicated situation, produce test files, screenshots...) may be too short a time window while also working at your daily job...

    Marco

    • Actually "4" is "Read the proposal before commenting on it". ;-)
      He does mention your concern in the article - just click the blue link above.

      • Yeah, I did, and "We cannot check all different kinds of webforums, so we rely on you posting bugs into our bug tracker." is the right argument, going in the wrong direction. Who out here has the time to hold your baby? Such a timeline and expectation is rather narrowly arrogant and naive. Not everyone is engaged in graphics day in and day out.

        I agree with the OP, a week is too short a time to expect people to JUST all hear about it, much less test it too. You know, we might have jobs and real-life needs too, not the leisure to be following web boards daily. Facebook! Twitter!? C'mon man. Who in the working world has time for dozens of notices a day on those except kids without any real responsibilities? In sum, this release cycle is outrageously immature in expecting working people to just grasp that it's out there. // Frank

  2. As a software engineer, but not a Blender contributor, I know bugs are sometimes hard to expose. In a normal commercial application development environment, test scripts are prepared/updated for testers to use in their testing. Knowing that not all conceivable combinations of things can be predicted, it is good to have the known combinations that have exposed bugs in the past listed as part of the test scripts. On Windows, you can use a program called AutoIT, which is free and OSS, to script an automated testing routine.

    • I'm pretty sure they use some testing tools, but obviously user input is what the software is made for, not machine-like testing (a testing program will probably only be able to repeat the steps it's told to do, but not go mad and try out all kinds of things with the new tools). Also, quite a few bugs aren't just crashes but stuff not working as it should, which I can't imagine a program would be able to spot.

      Actually, some user testing isn't too much to ask for IMO and it actually makes me happy to see issues I spotted go away in minutes and be gone in the next release :) (this is open source!! and I'm happy to be able to contribute with zero coding knowledge :))

      • I completely agree about user testing, and about how nice it is to see a bug fixed because you found and reported it.

        About program vs. user testing: In the article, there is a link to the SVN regression suite of .blend files, and I think those blends are used to help check for problems other than crashes. So they could be used to complement whatever testing programs you might use to check for errors.
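
        To make that concrete, here is a minimal sketch (not part of Blender's official tooling; the Blender path and the test directory are assumptions) of walking a local copy of the regression .blend files headless and flagging any that crash on load:

        ```python
        # Minimal sketch, assuming a local checkout of the regression .blend files.
        # BLENDER and BLEND_DIR are placeholders -- adjust them for your machine.
        import pathlib
        import subprocess

        BLENDER = "/usr/local/bin/blender"     # path to the build under test (assumption)
        BLEND_DIR = pathlib.Path("lib/tests")  # local regression .blend files (assumption)

        failures = []
        for blend in sorted(BLEND_DIR.rglob("*.blend")):
            # --background loads the file without a UI; a crash on load shows up
            # as a non-zero exit code, which is the only thing this sketch checks.
            result = subprocess.run([BLENDER, "--background", str(blend)],
                                    capture_output=True, text=True, timeout=120)
            if result.returncode != 0:
                failures.append((blend, result.returncode))

        for blend, code in failures:
            print("FAILED (%d): %s" % (code, blend))
        print("%d regression file(s) failed to load cleanly." % len(failures))
        ```

        A real harness would also render or re-save each file, but even a load check like this catches outright crashes.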

    • Awesome, I will have to look into AutoIT.
      I'm a software tester and at work we use TestComplete but it costs too much to buy a license for home use. I'd love to set up automation tests for Blender.

      • lethalsideparting

        Hey ideasman! Cool stuff, thanks for adding that to the FAQ! :) I read through it and a few thoughts sprang to mind (since I've done a lot of work with unit testing in the past). You may know most of this already, and it might not all apply to Blender, but I thought I'd bring it up in case any of it was new...

        >> Tests can take too long to run

        This is normally a sign that the tests are doing too much. The key to this is to divide and conquer:

        - Make the tests smaller. Rather than making large sweeping tests like "test rendering this complicated scene", "test physics with this crazy object", etc, it's better to make them as small and specific as possible. Examples might be "test tool A works on triangles", "test function B doesn't crash if given an empty mesh", etc. This speeds them up, and makes it easier to read through the test and see what exactly has broken if one of the tests fails. (A minimal sketch of such a test follows after this list.)

        - Make the test data as small as possible. Tests shouldn't ever use large data sets if at all possible.

        - Set the tests up so that they're easy to run individually. For example, if you use cmake's unit testing support, it outputs a different executable for each module you test, and it's simple to quickly run one/a few of them if you're only interested in testing a small part of the codebase. Similarly for Python stuff, "nose" allows you to be very specific about which tests you want to run (eg just one test, just one module, just one submodule, etc). That way you only run the full set when preparing to push a patch centrally, which saves a lot of time.

        - Break the tests up into "unit tests" (small things you can run quickly), "load tests" (larger, longer-running tests you only run before release, to make sure you haven't introduced any performance regressions or crashes in extreme cases), and "UI tests" (automating various clicks in the UI, checking they still do the right thing and don't crash). When coding, you just run the "unit tests" part (since they're nice and fast), and only go for the full-fat "test everything" approach when preparing for release.

        - If necessary, assign a specific machine somewhere that runs all the tests nightly (eg after a nightly build). That way you still get a heads-up when things break, but don't require everyone to run tests all the time.
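
        To make the "small and specific" idea concrete, here is a minimal sketch of what such a test could look like against Blender's Python API (a hypothetical example, not one of Blender's actual tests; it can be run with `blender --background --python <file>`):

        ```python
        # Minimal sketch of a small, specific test: an empty mesh should be valid
        # and report no geometry. Hypothetical example, not an existing Blender test.
        import unittest
        import bpy

        class TestEmptyMesh(unittest.TestCase):
            def test_new_mesh_with_no_geometry_is_valid(self):
                mesh = bpy.data.meshes.new("empty_mesh")
                try:
                    # Mesh.validate() returns True only if it had to correct errors.
                    self.assertFalse(mesh.validate())
                    self.assertEqual(len(mesh.vertices), 0)
                    self.assertEqual(len(mesh.polygons), 0)
                finally:
                    bpy.data.meshes.remove(mesh)

        if __name__ == "__main__":
            # unittest.main() would try to parse Blender's own argv, so pass our own.
            unittest.main(argv=["blender-unit-test"], exit=False)
        ```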

        >> Tests can take a lot of time to maintain, or just end up
        >> needing a lot of maintenance to keep running with
        >> minor changes in Blender.

        We had this problem on a team I worked on in the past. Since not everyone in the team wrote unit tests, the burden for creating tests fell to a small number of people, who inevitably couldn't keep up with the churn produced in the rest of the codebase by the rest of the team. So at one point, almost no one bothered at all.

        The way we fixed it was by making it a requirement that if a developer submitted a bugfix or a new feature, they had to include unit tests/fix any tests broken by their change in order for it to be accepted. The tests wouldn't necessarily need to be perfect (we wouldn't require 100% code coverage for example, and we mentored new developers on making good tests), but it meant the burden was shared and that every developer got into the habit of using them. It also meant that the parts of the code that were being changed the most also got tests the quickest! An unexpected bonus was that if the developer left the project, then another was able to get up to speed quickly without introducing regressions because of code they didn't understand.

        Because we required a unit test for each bug fix, as more bugs were reported against our code, the more unit tests we had - the code slowly got more and more stable, and we never had the same bug appear more than once (which was a point of pride).

        We also had a rule that no code could be merged into our master branch if tests failed because of it, and we rolled back commits if this happened - this meant we were always able to release at a moment's notice, which is very helpful when emergency bugfixes are needed. :)

        >> Tests cover areas that hardly ever change (go for years
        >> without being touched), so usefulness is limited, except
        >> when we want to rewrite that code.

        Those tests are sometimes the most useful of all! :) They allow you to catch bugs that mysteriously arise when eg. compiler versions change, or build flags are modified, or a library you depend on is updated, or some library was compiled with a different version of gcc, etc. Most of which happened in our project at some point! :)

        In the case of open source projects, it also allows end-users to test your code when porting to new platforms (eg PS4?), and be sure that all the parts are working as the original author intended them to.

        >> Tests cover areas that we notice immediately if they
        >> break anyway, opening a file selector crashing for eg.

        I do agree that UIs are normally much more difficult to test than back-end code, and aren't always worth the effort. We did have some success with unit tests that would "play back" recorded user interactions (mouse moves, key presses, etc), and check that the application completed them successfully without crashing. It meant that we could check every part of the code base quickly - it's often hard to be sure that a change you've made in one part of the code doesn't have unexpected side effects elsewhere...

        >> Tests may break, but not actually be bugs

        That's normally a sign of badly-written tests rather than a problem with unit testing itself.

        >> Tests may break, but not really be a problem the end
        >> user would ever face (an API test may be attempting
        >> some impossible situation for example).

        True, but you don't know if there are scripts out there in the wild that actually depend on this behaviour (eg scripts that expect a particular type of exception to be raised in this particular weird case so they can catch it and handle it). More than once we ignored a test, only to find that it was there for a reason, and we'd just broken something! :-/ Our attitude in our code base eventually became "if it's important enough to write a unit test for, then it's important enough to fix if the unit test breaks".

        Anyway, hope this was helpful - just shout if you have any questions, and I'll do my best to answer! :)

        Karl

        • @Karl, what you say makes a lot of sense, and I do feel like some of the stuff I wrote goes against conventional wisdom.

          re: divide and conquer

          This is fine, however many bugs we have are not simply that one of our API calls breaks --- it's often the interaction of many function calls with specific data (maybe input that's not expected). So we can do both, have tests for single API calls, and some tests for more complex areas of the application... it CAN be done, I'm just pointing out that it's not so simple.
          And just because only some areas can be tested isn't a good argument against writing tests at all, either.

          re: Tests may break, but not actually be bugs...

          I wouldn't have written this, except I ran into it quite a few times when updating tests that fail; Python can do some things the user can't, so technically you can call it a bug - but in practice it's not really, and it may take 3 hours to fix some bug which is obviously a stupid thing to do in the first place.... (diminishing returns - sigh).

          So I won't reply to each point you made, but I read through your reply.

          I think you would have to attempt to write some tests in Blender to really see what I'm getting at - it's hard to explain why it's difficult without sounding very lame :), but it really is! At least, hard to do in a way that makes a positive impact IMHO.

          But something to consider is that game companies still employ staff for testing - since Blender is full of interactive tools, you can appreciate there's only so far you can go in automating testing.

          • Chrome Monkey

            The phrase "unexpected input" caught my eye here.

            I only know a smattering of Python, I am mostly a PHP guy. Just enough to be curious about how Blender does data validation. Is there an equivalent of "escape output, filter input" that works within Blender's procedures, tools, modifier stack processing and so on? Or is this just me doing some complete apples-and-oranges wrong thinking on this?

          • @Chrome Monkey. The problem is unexpected input isn't invalid input.

            I'm inventing some bugs here... but these are the kinds of problems I have seen (in fact most of these examples really happened):

            * the inset-faces tool crashes when it gets a mesh with no faces. (only one selected vert)
            * Weight paint gradient misses initializing mesh weights for the first time but only with F6 redo.
            * Exporting an empty mesh makes the X3D exporter raise a Python exception.
            * When you try to join objects and the active object is on a hidden layer and you happen to be in vertex paint mode....

            I think you get the idea. I did set up tests to automate testing strange cases like this, which I run before every release, but even then it's barely scratching the surface.
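
            For illustration, a rough sketch of automating the first case above might look like this (a hypothetical script written against a current bpy API, run via `blender --background --python <file>`; the only thing it checks is that Blender survives the call):

            ```python
            # Rough sketch (not Blender's actual test): run the inset tool on a mesh
            # that has one vertex and no faces, and confirm the operator returns
            # instead of crashing. API names are for a current Blender build.
            import bpy

            mesh = bpy.data.meshes.new("one_vert")
            mesh.from_pydata([(0.0, 0.0, 0.0)], [], [])   # a single vertex, no edges/faces
            obj = bpy.data.objects.new("one_vert", mesh)
            bpy.context.collection.objects.link(obj)
            bpy.context.view_layer.objects.active = obj
            obj.select_set(True)

            bpy.ops.object.mode_set(mode='EDIT')
            bpy.ops.mesh.select_all(action='SELECT')

            # The tool may legitimately cancel; the point is only that Blender survives.
            result = bpy.ops.mesh.inset()
            assert result in ({'FINISHED'}, {'CANCELLED'}), result
            bpy.ops.object.mode_set(mode='OBJECT')
            print("inset on a faceless mesh:", result)
            ```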

          • Chrome Monkey

            I see the point about two and four being difficult to trap, and I know it just scratches the surface. Inset Faces not throwing away a mesh with no faces? That sounds like half and half... something that could be validated against a condition like "# of selected faces = 0" would be easy to anticipate, but faces selected plus one extra vertex that doesn't belong to a selected face would be easy to miss. Well, I was just curious. The sheer number of variables to juggle is staggering, though. I'd burn out in an hour!

          • Chrome Monkey

            I have no doubt of that. I wouldn't even know where to start! It's way out of my league, to be sure. The only testing I know how to do is the old way... looking through a data dump line by line.

            In case I'm communicating wrong... my question was from curious ignorance, not as suggestions or armchair quarterbacking or backseat driving. :)

          • lethalsideparting

            @ideasman42
            Cool, thanks for taking the time to respond! I certainly see what you mean that several of the bugs in the release log would be difficult to test for, particularly those that were UI/drawing related. I've noticed that to get the most out of unit testing, code needs to be structured a very particular way (loose coupling between components, able to inject mock logic, etc), and starting testing without that structure in place already can be difficult.

            For comparison, my background is leading the core-pipeline team at one of the large VFX studios in London. It was a large codebase and we weren't a big team by any means (definitely no dedicated testers!!), but we were still able to get pretty decent coverage (>80%) across our codebase, with most of the missing parts being UI code or edge conditions that were too difficult to test for. It definitely saved our bacon a number of times!

            Anyway, I appreciate that you've put some thought into this and that Blender's situation is somewhat tricky. I mostly wanted to make sure you weren't under any misconceptions about unit testing itself, which you're definitely not. So, thanks for reading and cheers for all of your hard work on Blender! :)

            Karl

          • @lethalsideparting, good to hear you managed to get testing working so well. Perhaps I have over-emphasized the `It's too hard` aspect of my reply, and should say `It takes time`; I think a developer could easily spend a month or two full-time improving our testing setup.

            Note that I have worked on this and if others are interested we have tests in: source/tests/CMakeLists.txt - so these could be expanded on.

  3. @Thomas, if user support on the bug tracker really isn't good enough, you could probably offer users some kind of rewards (like 20 successful bug reports = a little feature wish that might be taken into account for actual implementation :))

    This is just an idea, but I've seen numerous discussions on blenderartists about new features and people who want some kind of voting system for new ideas etc., and this would probably be something that actually works while encouraging people to make bug reports AND make them as clear and understandable as possible.

    • I had the same idea about half a year ago... But it seems it's not as easy as we think :)
      And actually we get some new features faster than expected! Isn't that a reward? ;)

    • P.S. Maybe a reward could be something "abstract" like a virtual dinner (or coffee) with any of the leading Blender developers (if you can't travel to the Netherlands). Or some cool and exclusive t-shirt with the devs' autographs (also with any of Blender's popular characters or some cool render) that is not available for sale.
      Or a printed character or a "rigged" robot, a machine... something funny and cool at the same time! :)

    • A problem with making the bug tracker different/better is that Blender devs are (mostly) not web developers, so it's not something we know how to just add in.

      • I agree, these virtual rewards kind of go too far away from what the developers are actually doing (making bugfixes and new features).
        The idea of higher-priority feature wishes as a reward is maybe also too much, but a basic ranking system would probably be enough of a "reward" - like people having a badge on their profile that "certifies" them as a premium bugsquisher, something nice or funny that further encourages people to actually make bug reports, and make them good (which makes bugfixing faster and easier for the developers). But even that, of course, requires some web design.

        It was just meant to be a "solution" to the problem of an overall "laziness" in the community with bug reports. Currently it seems OK, but it appears to be getting worse (we had a "b" release for the first time, right?)
        So Dingto's reminder on this might well already have improved future bugfixing (we'll see), but I also find a feature proposal system quite important, as I see many people posting quite good ideas randomly in the Blender community.

        So maybe if bug reports get worse you could consider some kind of system like that, but as long as it works, just keep fixing the bugs, which you guys are awesome at! :)

        • @randomguest

          Imagine that... The guys run an incentive system like the one you mentioned (badges, ranks and other stuff). It would possibly become more like a game...
          I've worked in the online games industry for four years. Every "rank" system attracts a lot of "dumb" users. It's not meant as disrespect, but it's how it works. The fact is that these users bring "profit" (actually just killing time) only for themselves.
          In Blender's case those profits are the useful bug reports. I don't consider myself a very useful bug reporter, but I feel like I've reported something worthy several times. But I'm sure that accidentally I file a lot of dumb reports too. Actually, at the moment I don't get all bug notifications by mail (something was changed), so now I will sometimes be duplicating somebody's reports... (I remember that the search feature doesn't work well).
          Now just imagine how many dumb reports the "rank-stimulated" users could possibly bring.
          So I assume... if this feature ever gets a chance to start, then it must be started very carefully. Maybe with a kind of pre-moderation at the first stage of reporting, so unskilled reporters won't take up the devs' time. But then some respected Blender artists should take on this task of pre-moderation (to exclude accidentally wrong moderation).

          • @Moolah, thanks for the perspective.

            I'm quite wary of gamifying bug reporting; isn't it enough that someone looks at the reports and sees to fixing them?

            I was listening to Apple developers say they get 2 free support tickets a year with Apple after paying up to be allowed to submit to Apple's dev store. Not to pick on Apple, just a contrast.

            Emailing was disabled because the server was hanging when attempting to mail many users at once. As you can probably tell, the Blender guys are a bit short of good sysadmins with time to solve such annoying problems.

      • @Ideasman42,
        In the current state I think it's enough just to check all reports.
        Or maybe I've misunderstood something in your words...
        The more incentive you give to users, the more they do to obtain rewards. I'm just re-phrasing what I said to randomguest.
        Maybe (if this thing ever happens) the "prizes" should be more workflow-oriented. My closest idea is "render hours" - you could even make your own little farm for this aim. :) What I mean is that gifts which aren't very interesting for beginners won't be very desirable to them.
        A free account on the Blender Network (for a few months) is good and also isn't for novices.
        A week of coding lessons with anybody of your team, or close to your team's level (who can contribute to this action).
        Or a week of sculpting, modelling or design lessons for advanced users. In my mind it seems to be very cool. But it isn't good if you're a newbie here. That's all of these ideas :)

        Yeah... Apple is so Apple :D If I got you correctly, they sell support tickets to their devices' users. That's not very surprising (if this is right) :)) The contrast is between the commercial strategy and Open Source. And... actually I'm not a guru about this.

        Yes, you need some pro to solve this problem... Can you make a call to the Blender community here? Describe it well, and as it will be indexed on Google (right?), I think you'll get some help sooner. I hope this is acceptable from all sides.

    • As far as I know the RC is the last official release before the final release is made. However, there might still be buildbot releases in between the RC and the final, but as long as you attach the revision number (or, if it's a custom build, what type it is) to your bug report, the developers will be fine with that I think.

      But as test builds from the buildbot/GraphicAll are always the most recent Blender version (at build time), they are likely to include more bugfixes than the RC (if newer than the RC).
      However, since a feature freeze is made close to the release, the RC doesn't include any newer features, but anything between the RC and the release will include the latest fixes made before the release.

      • I just wanted to clarify this :) I'm not dumb, but I always need a "sure" answer from the devs for such questions.

  4. DolphinDream

    Those 3 are the key! :) I do them all! However, one thing that could be improved, and would encourage people to do all 3, is to revamp the bug tracker front end to make it more appealing. Perhaps a bridge between Stack Exchange and the bug tracker would be useful? I must say, the bug tracker site is not very inviting as it is. Perhaps allow the user to log in using their Google or Facebook or whatever credentials, so that users would not have to create yet another account just to post bug reports. I always thought that having a way to submit bugs directly from Blender (perhaps together with some screenshots and the blend file) would make submissions a little easier.
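
    A hedged sketch of that "submit from inside Blender" idea (purely illustrative, written against a current bpy API; the operator and the tracker URL are placeholders, not an existing feature of that era) could be a small operator that gathers build info and opens the tracker in a browser:

    ```python
    # Hypothetical sketch: an operator that copies version/build info to the
    # clipboard and opens the bug tracker in a browser. TRACKER_URL is a
    # placeholder; this is not an existing Blender feature of that era.
    import webbrowser
    import bpy

    TRACKER_URL = "https://example.org/blender-bug-tracker"  # placeholder

    class WM_OT_report_bug_sketch(bpy.types.Operator):
        """Collect build info and open the bug tracker"""
        bl_idname = "wm.report_bug_sketch"
        bl_label = "Report a Bug (sketch)"

        def execute(self, context):
            info = "Blender %s, build hash %s" % (
                bpy.app.version_string,
                bpy.app.build_hash.decode(errors="ignore"))
            context.window_manager.clipboard = info  # paste this into the report form
            webbrowser.open(TRACKER_URL)
            self.report({'INFO'}, "Build info copied: " + info)
            return {'FINISHED'}

    def register():
        bpy.utils.register_class(WM_OT_report_bug_sketch)

    if __name__ == "__main__":
        register()
        bpy.ops.wm.report_bug_sketch()  # or run it from the operator search menu
    ```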

  5. I think there should be a testing team available at a moment's notice for a release. Having the whole Blender community testing can lead to confusion and too many silly reports due to hardware capacity or graphics card issues. How Blender is coded and structured can also lead to misfires in performance. OK, so we all know this, but if we care to see Blender become more useful in our projects then let's reconstruct the way we support Blender. Last note: please bring back the Curve
    CRGB in the OpenGL viewport render. It sucks in the compositor: it's slow and it crashes too much.

  6. I think it's important to use the early releases and help report the bugs as much as possible. We're a big user base with a small development team. Many hands make light work.

    But to be honest, I think Blender could drastically reduce bugs if we focused on strengthening existing features before adding new features. I have no inside view of Blender's development and this is just an outside assumption, but it seems to me like the bulk of the bugs come from two main recurring issues:

    1) From implementations of new features (which involve internal changes) conflicting with the existing internal code of the standing features

    2) From working on new features before addressing all the issues with standing features (thus less time spent on addressing long-standing bugs due to more features to manage)

    I don't know just how accurate those two factors are--I'm just assuming here, from what I seem to observe. It seems to me that either one of those would be resolved if developers would tighten up the code of a feature and finish it completely before moving on to another new feature.

    Of course, I'm just speaking for myself here. I personally would rather have a complete feature while waiting on other features than trying to wait on multiple newer features being worked on at the same time. I'd rather have a complete and working tool than a release packed with new additions.

    We don't have the luxury of a plethora of developers, and so managing what could be developed with Blender obviously takes some planning. We might benefit from having a community-input system towards what gets developed.

    I've seen a number of Blender users say things like, "Hey, when are you guys going to improve this and that long-standing feature in Blender," release after release, and I think what problems persist continually to core users are the issues to be most addressed.

    For instance, 3D printing features are nice and desired, and I do indeed appreciate them now that they're here, but truth be told, it wasn't really all that high in demand as a feature. I think I personally would've rather seen a robust improvement to the texturing tools, since Blender is the only free alternative for 3D texturing (short of Sculptris' limited texturing). Blender's texturing system doesn't even have a simple bucket-fill tool yet--nearly every other 3D package does.

    Perhaps if we had a system where the bulk of users could express what features they'd like to see sole focus on with development, that might help as well--though, with this idea comes the risk of novices voting for the flashiest new features rather than standard features a professional needs.

    • Part of testing is educating the user base in just why certain development goals are important to achieve first before moving on to others.

      In fact, I think a little more education to the user base is what Blender's needed for years now. We still keep this general stigma of being a crew of novices, simply because, at large, we are.

      We would benefit from having our user base think in a more professionally-oriented way, rather than a merely hobbyist-oriented one. Whatever professional needs get met can only serve the hobbyist's needs better.

    • @Brian Lockett,
      I'm not a coder, but I'm almost sure that adding new features that (obviously) bring some new bugs is the intended way of Blender's development.
      New features can bring up some old and almost forgotten bugs, and the things that weren't finished (nothing is perfect, as we know). You won't "feel" these bugs until they are illuminated from a more visible side.

      • Well, I don't agree about the sentiment that nothing is perfect. "Perfect" is always in accordance to purpose. If you can complete anything you purpose, you've perfected it.

        If you complete that purpose and go about refining it, you will achieve a perfection. Maybe not an absolute perfection, but one perfection at a time. This is pretty much what I'm talking about--completing and refining features to a completion, one at a time.

        If developers worked on features this way, we'd have had Cycles completed sooner. We'd have releases with completed features (completed to their set purpose, that is), and should we need more changes to the internals due to the introduction of newer features, then that's something to be handled within that feature's complete development.

        For instance, in making changes to Cycles, you can better anticipate that changes to the Node Editor code might be done--you can better anticipate where the conflicts might occur, because you're giving one area fuller attention.

        You will run into fewer internal issues when you can count on completing a feature and knowing what other code it interferes with, rather than trying to juggle between managing many codes at once.

        I personally would rather have occasional releases with completed features than frequent releases, waiting on new features to be done. It'd also drastically reduce the number of Blender applications I have installed--with all these releases, I'm commonly relying on an older release for something the new ones can't yet do (namely, addons and sometimes better stability).

        Of course, testing for bugs is still very important, because even developing the way I'm suggesting--completing a feature at a time with sole attention--can still present bugs.

        But when you have releases with completed existing features sooner, you're giving people a complete product to deal with, rather than testing a part of a feature, waiting for another part to be developed, at which point your testing starts all over again. Testing becomes rather tedious when you have to deal with testing so many new releases of Blender. I can't test all of the features of a Blender release at once, especially when there's another release soon on the way.

        Though, this is just a thought. I know the situation isn't as easy as just naming a sole solution. It seems Blender often has a development goal of meeting features not common in other 3D packages, and I can respect it.

        But it just seems to me that it's not proving to be the best method for fulfilling a professional's needs. Hobbyists are eager for new features, but professionals can't wait for new features in parts while the refining of existing features gets pushed aside, that's all I'm saying.

        Professionals will generally go somewhere else, like modo. Luxology is clever in that they release completed features with every new release, and add "SP" releases as needed. In fact, I've been highly interested in modo lately. But I'd love to keep Blender in my workflow--if it were easier to manage with Blender's releases.

        • @Brian Lockett,
          "nothing is perfect" isn't a sentiment. It's a side of Asian Eastern culture. I'm not sure how much nations preserves it but true is that Japan's art is mostly in that way. They leave something that makes a picture (as an example) imperfect. Sorry, I'm not strong in philosophy but this can be explained rationally.
          The purpose is relative in many cases. In my current state (for ex.) I don't need SSS right now. So Cycles main development is complete for me mostly (and I highly appreciate all further work and many general features like SSS anyway). The purpose can't be fairly compared with other 3d programs because then we can be dealing with all other features that Blender has and these apps don't have. So I'm getting to the line that any purpose of Blender actually is defined by Devs and Blender's community.
          Cycles will be developing continuously (not my words) and it can't be called "complete" soon.
          As I see GSoC 2013 aims - Blender will be more organized after completing some of it's features.
          Blender is so wide that "jumping" between different code issues isn't a problem. I guess it's a mean to develop it by some kind of an iterative way.
          Nobody (except Devs I guess) is testing all features of Blender! :D
          Now I'm working as a modeller (generally) but I can't test all modelling features anyway. I can quickly run through them but it's not counting as "a testing".
          Brian, professionals support projects (like the new Compositor, the Ocean Sim, BSurfaces and etc.) they need by donations.
          Professionals generally avoids such "fancy stuff apps" like Modo or spend their money, get disappointed then migrate somewhere else.
          Yep, Modo is a kind of a "hate area" for me :)
          Since I've got they have (had maybe) a worst memory management, a lot of buggy "new cool features" (in 401 they haven't any adequate armature systems) and the worst tech. support I've ever seen. Even people on THEIR forums told me the same. And actually there I've read many warm words about Blender :D All Modo release system looks like a kind of "blackmailing" - "you pay us then we will fix those nasty bugs of those cool features that hooked up your ass. You don't? Nevermind - we'll make more fancy stuff and cool tutorials where we'll be continuously praising our new cool features to hook up another money donkeys".
          I've almost bought it :)
          Now a lot of professionals choose Blender and keep working with it. It's not a "fan boy" words and we know a lot of serious examples.

  7. I think given the nature of Blender, we should rename the releases.
    The stable version of Blender is the current version one day before the new release hits, whether it’s an a, b, or c release.

    That version should be named: “Blender: Stable Release”
    It should be at the top of the Blender website for download. It should remain there unchanged until the next release ~2 mos. later.

    The new feature release should be named something like: “Blender: New features test build”
    All versions, a, b, or c, should be put there, either on a different page or under the stable release.
    When the next release version is ready, this release, a, b, or c becomes the “Blender: Stable Release”

    This is the only real solution to this.
    Blender should stop promoting the new version as the one the average person should download.
    It looks bad to people who don’t understand Blender.

    • +1 to this. Having a "Stable Release" that's one version number behind the cutting-edge release seems like it might, not necessarily fix the problem, but make it less embarrassing. If the latest release is listed as unstable, and the previous one is called the stable build, it's less embarrassing to have to do repeated bugfix releases.

  8. Why this embarrassment about a’s and b’s?

    We have graphicall; we have the Blender buildbot;
    the umpteenth bugfix never removes the last bug,
    but we keep using an excellent product and finding workarounds.
    If I can find a workaround to a bug, I don’t usually report it. Is that so bad?

    When starting to search for a workaround,
    the first thing I do is try alternate versions of Blender,
    of which I keep several rotating on my file system.

    If an a or a b comes out, I download it immediately,
    always grateful to Ton Roosendaal for not abandoning
    my somewhat worn-in Macintosh: thank you!

    • Chrome Monkey

      It's a matter of convention. When I am in a hurry due to someone's "emergency" and I need to download and test some applications I am not familiar with, the word Stable is the most helpful thing that draws my eye. Terrachild and Nburgin are right.

  9. Well, I have tried giving a bug report twice before, but I was confused and discouraged by the ugliness and the 'unhelpfulness' of the bug tracker site. Going there is not a very pleasant experience. It is just that that place feels unnatural.

  10. For those that are interested in the logic behind what Thomas is mentioning, there is a great series of essays written by Eric Steven Raymond, compiled together as "The Cathedral and the Bazaar". These two chapters are especially relevant: "Release Early, Release Often" and "How Many Eyeballs Tame Complexity", but the whole collection is a short-enough read for an evening or two. In short, "given enough eyeballs, all bugs are shallow." (I'm guilty of being hypocritical here, because I haven't done any of what Thomas has asked of us... I recognize the advice he gives is sound, though.)

  11. Well, what's the general conclusion from the bugs found? Are most of them traced to the latest changes?

    * If yes, then please publish regression test areas, so that the testers can focus there.
    * If not, only automated testing can help. Or make unit tests compulsory when anyone commits.

    BTW the issue-tracker has extremely poor query tools (Bugzilla is far better).
    I also faced another problem: I am always logged in the Blender wiki. But if I go to the tracker part of the website, my login is no longer valid. Apparently I have to use another set of username+password. A unified login would be far more convenient...

  12. In my experience, release candidates are often ignored by people who feel they need a stable version to work with. They won't pick up a tool until someone else has worked out the kinks. That's the price of competing against commercial mindsets - they don't touch something because they expect the price paid for the tool to eliminate most of the errors they would otherwise experience when using it.

    In the case of Blender (and open source in general) that's not the case. The tools only improve when real-life users attempt to use it for real world projects. The better the user working with the tool, the more well tested the tool, the easier the tool is to use. I'd imagine that's the reason there's a Blender movie project - to push development through the stresses of a production environment. So that real artists and users can swear up and down at the thing while glowering about real deadlines and trying not to cry and curl up into a gibbering mess. It certainly gives developers an eye opening experience to how they need to change their code to make the tools useful.

    Corporate systems use a fraction of the profits to drive research and development into the next iteration of the system, and end-users get a tool that is "complete" out of the box for the most part. End users of a paid-for tool only do beta if absolutely necessary... a feature that cannot be lived without, for example. And then they feel like guinea pigs, because things that *should work* are *broken*, and while the support from the company is there to help them, it feels like they're doing all the work to make a functional product and paying for it at the same time. It's probably one of the worst things in the universe for a non-programmer to feel when trying to meet a production deadline.

    When a person uses Blender as a workflow tool, they have that mindset screaming at them in the back of their mind: "I'm not going to use this for anything serious until it's more stable." There's a lot of "cost" in the user's mind - whether it's real or not. I guess what we need is a stable release and an RC release paired together and easily accessible, so that users can say "Ok, today I'm going to give the RC a chance with my project, knowing I have a stable release as a fallback point."

    An even better idea would be to have the last stable release packaged WITH the RC - and if the RC crashes a bug report is generated and the blend file recovered and opened in the older version if possible. If it crashes in the older version too... then generate a bug report for that.
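
    As a rough illustration of that wrapper idea (purely a sketch; both executable paths are assumptions, and the "bug report" here is just a local log entry), it could be as simple as:

    ```python
    # Rough sketch of the "RC with stable fallback" idea above. Both Blender
    # paths are assumptions; the "bug report" is only a local log entry.
    import datetime
    import subprocess
    import sys

    RC_BLENDER = "/opt/blender-rc/blender"          # release candidate (assumption)
    STABLE_BLENDER = "/opt/blender-stable/blender"  # last stable release (assumption)

    def main(blend_file):
        rc = subprocess.run([RC_BLENDER, blend_file])
        if rc.returncode != 0:
            # The RC died; keep enough information to file a proper report later.
            with open("rc_crash_log.txt", "a") as log:
                log.write("%s  exit=%d  file=%s\n" % (
                    datetime.datetime.now().isoformat(), rc.returncode, blend_file))
            print("RC exited abnormally; reopening the file in the stable build...")
            subprocess.run([STABLE_BLENDER, blend_file])

    if __name__ == "__main__":
        main(sys.argv[1])
    ```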

    Another thing to keep in mind is that Open Source tends to be modular from release to release. Unless you're recoding core functionality, you can make newer functionality optional for the user. A stable release could have beta and alpha modules attached, enabled by the user, so that a feature can be tested unless that feature requires a newer "core". That way the people who could really use a new feature can access it, while others who don't need it can leave it disabled until it's officially part of a release.

    Of course, all of this has to be made accessible to the public. If the end-user can't identify a way to help a project along in the first 30 seconds of looking at your website, then you're probably not going to get that person's help. They'll wait until there's something that they can use that does what they want without the investment of time required to deal with an incomplete tool. And that user, knowing your product is free, may decide that if things are too difficult to find out now, they'll just use Maya or something their office paid for instead - something with support that doesn't depend on community generosity. They'll keep an eye out for the features they need, but they won't actively use your tool until they can be sure that it will work for what they require. And that's even worse.

    These are just some thoughts, your mileage may vary.
