Channel: Libre Graphics World. Blog

The complete story of Paris-8 university going for Krita, Blender, Natron

It's not so uncommon for an educational institution to adopt free software in its programs. But last month Krita's website buckled under a surge of traffic, hundreds of comments were posted in various forums, crazy theories evolved, and old myths about free software got a fresh run at the curious minds of the Internet in a giant positive feedback loop.

Editing in Krita, from the production of the "Le Désert Du Sonora" short movie

The news of the Art and Technology of Image department at Paris-8 university switching to Krita, Blender, and Natron caused quite a stir in CG communities around the world. Some discussions were quite fun to read, like this one on Model Mayhem:

— Sounds like they're short of cash and trying to make a virtue out of a necessity. I suspect this will last for a year or so till numbers starts to dwindle, with potential students going elsewhere to be given proper training with industry standard software...

— As professionals, although our day-to-day job is busy, we should really be open to how new players - even Open Source ones - solve old (and new) problems. The paradigms might be shifting...

In some other cases, discussions (Hacker News, Linuxfr.org) quickly evolved into myth debunking between users and developers of both Krita and GIMP.

Well that escalated quickly

But here's the problem: there was not enough information about the decision in the original post at krita.org.

Inadequate support from Adobe? In what way? Being pushed around to make choices that go against teachers' beliefs? What does that mean? How did this go over with the students? Were they interested? Did they rebel and demand to be taught the software actually used in VFX studios? The questions just keep coming.

So LGW attempted to fix this the only way we know how: by interviewing François Grassard, a teacher at Paris-8 and a CG professional with 17 years of experience in the industry, who is the primary person behind this initiative. It's a long read, so sit tight. We'll get to the bottom of it, that's a promise.

François, thank you for finding a slot in your super busy schedule. Let's start with assessing the situation with Paris-8, its ATI department, and Adobe. What exactly was meant in the quotes about inadequate support and being pushed around?

First of all, I'd like to clarify some points about this news item at krita.org, because, according to my friend David Revoy, the post seems to have gone viral on the Internet over the past few weeks, and I've read a lot of comments on several social networks and forums whose authors are far, far away from the facts.

Here's a disclaimer: I've been a user of Adobe products for a long, long time. I've been using After Effects since its first commercial release about 20 years ago. I love this software, and I've made so many animations with it in so many contexts (motion design, TV identity, VFX). As a freelance artist, I've been using Blender and After Effects together for about 10 years. For me, it is one of the most efficient duets for getting my work done fast.

So, as a compositing teacher at Art and Technology of Image, why did we decide to stop teaching Adobe products to our students? The answer is a single word: budget.

Editing a shader in Blender, from the "Le Désert Du Sonora" short movie production

Our university is a public school, and those are mostly free in France, except for registration fees that are pretty acceptable in comparison to fees in some other countries.

To acquire all licenses of software that we teach, we have to rely on our own budget only. And our budget is ridiculously low. So we always have to talk to software vendors about the price of licenses.

Some companies are pretty fair with us, and we can, for instance, buy all the Maya licenses we need. In other cases, companies can't lower the price of their products. That was the case with Adobe. We asked for a discount, they said they couldn't do that. End of story.

We don't blame Adobe for that. As a commercial entity, they have to make choices and decide whether they can accept this kind of deal or not. Last year, I taught both After Effects and Nuke. This year, I start my compositing courses with Nuke only.

So, after the meeting with Adobe, we discussed this problem and tried to find solutions. When we said "we don't want to make choices that go against our beliefs", it meant we refused the worst solution in this case: cracking Adobe's software and continuing to work like before.

The philosophy of ATI is to teach our students to be flexible in any kind of situation. That's why many of our former students now work as TDs [technical directors] in several animation/VFX companies. We always try to find good solutions — technically, ethically, and compatible with the budget of a production.

So what did you decide to do?

I called my friend David [Revoy] who helped me discover Krita a few years ago. He kindly agreed to come to Paris to introduce Krita to our students.

After 3 hours of showcasing, our students were mostly impressed by the capabilities of the software. Of course, it's not so easy to learn new software when you have used Photoshop for such a long time. But they were fine with that, because we were lucky enough to have wonderful students.

As a result, we switched from Photoshop to Krita. We didn't do it out of some kind of revolutionary ideology, even though, as you know, we are experts in revolution here in France.

We did it to continue using high quality software, and Krita was the best option. I think it's going to be a pretty good experience for everybody at ATI. And we know that a lot of other animation schools in France are watching us.

This was, in fact, one other puzzling bit in the original news text for those who aren't CG professionals living in France. Could you please tell more about RECA and their involvement?

It seems that we are not the only ones who have difficulties dealing with Adobe (or other vendors) regarding the price of licenses.

RECA stands for "Réseau des écoles françaises d'animation" ("Network of French animation schools"). It's a group composed of 25 schools of animation and visual FX that are well known in France for the quality of their work.

The goal of this group is to discuss plans and strategies of each school and ways for us to deliver the best knowledge to our students. We share ideas and methodology all year long during several meetings.

Around June 2014, we met up and talked about the license prices of each product. We discovered that buying all the licenses for all of the students was difficult not only for our public institution, but also for private schools.

When we presented our plans about Blender, Krita, and Natron, a lot of schools in the RECA said they were really interested too. But it is obviously more difficult for a private school to take part in this kind of experiment when it has strong partnerships with certain software vendors.

Compositing a sequence from "Le Désert Du Sonora" in Blender

As a public school, we have more flexibility to do that. This means that we are the first school in the RECA to integrate open source software. But if this experience is a success, a lot of other schools could follow the same path.

Originally, the news focused on Photoshop/Krita, but there's also Blender and Natron involved. What's the story with them?

I learned Maya, 3ds Max, LightWave, XSI, and many more applications starting from their first releases, but I've been a Blender user for years now, and it's what I mostly use for all 3D these days.

We don't have an official teaching schedule for Blender at ATI. We sometimes organize master classes about specific topics, such as organic modeling, character animation, dynamic animation for VFX, etc.

For 3D, we've been primarily teaching Maya since 1998. But for a few years now, most of our students have been arriving at ATI with some preliminary 3D knowledge of Blender. And a lot of them already have pretty good skills. That's a really interesting point! Because even if all instructors at ATI really love Blender, the choice of this software came naturally from our students!

You mean, students who'd probably die for a job in a big studio actually wanted to use Blender?

They make choices on their own. We never pushed them to use it. That's why a lot of projects at ATI massively use Blender for 3D purposes — sometimes in conjunction with other 3D packages, and sometimes alone.

And Natron?

Yes, after the switch from Photoshop to Krita, we had to solve the issue of finding compositing software. Previously, we used After Effects to introduce compositing to our students. The second application we use is Nuke. It's a powerful tool, but it's sometimes difficult to understand all the technical theory behind it.

At some point I heard about an open source compositing application named Natron. Even better, the team is French, from the INRIA lab! I contacted them, and they promptly replied. The project was not so stable back then, but it was being developed at an incredible speed, getting better and better every day.

I tested it a lot during the summer and made a lot of suggestions and bug reports. The team has been listening to users really patiently and with a lot of interest. So I called my friends at ATI and proposed using Natron to introduce compositing to our students. By happy coincidence, Natron shares exactly the same philosophy and shortcuts with Nuke. It just couldn't get better, perfect timing!

ATI focuses on teaching our students how to use 2D and 3D software, but also how to code their own applications in C++, C#, and Python. Cédric Plessiet, one of our resident code masters and teachers at ATI, now teaches them how to write OpenFX plugins for Natron. And the best part is that these plugins are fully compatible with Nuke, Fusion, Resolve, and many other commonly used production tools.

By the way, our students dived into Natron, and a lot of them decided to use it in their own future projects. Sometimes out of curiosity, sometimes only because it's open source, and sometimes for both reasons.

Compositing a sequence from "Le Désert Du Sonora" in Natron

The news mentions that the whole Art and Technology of Image department has switched to Blender, Natron, and Krita. How many teachers are involved with this?

There are 3-4 teachers involved with teaching these applications, including me and sometimes external professionals. But the same people also teach other software, like Maya or Nuke, at the same time. We mix them all together, because the techniques behind applications of the same kind are very similar.

Understanding those techniques is what we teach in the first place. Using actual software comes afterwards. We think it is the way to be more flexible and to switch easily from one software to another.

One correction, though: I don't think the term "switched" is right here. "Integrated" would be more appropriate. And in some cases — "deeply integrated", depending on the students.

How is that?

We are really flexible about the software our students can use during their intensive projects, as far as licenses allow. We try to listen to them, to their suggestions and ideas. Even though we have to prepare them for their professional life, we are an Art university where experimentation has to take place.

We still teach a lot of proprietary software at ATI: Maya, Houdini, Unity and Unreal Engine for real-time applications, and games, and much more. We keep in mind that our students have to know most of the software that is being used in big studios.

But at the same time, we try to teach them how to integrate some alternatives, such as Krita, Blender, Natron, DJV View, FFmpeg, and many more. Depending on the ideas each student has, this integration of open source tools in their pipeline will be partial or exclusive. We try to expose them to as diverse a set of software as we can.

But it's not so difficult, because most of our students are really open-minded and sometimes teach new stuff to each other during their own "internal masterclasses", at night or on the weekend :)

So you experiment a lot?

We do, all the time. We know we are in a complex triangle involving schools, companies, and software vendors. Each part influences the others a lot! Vendors want to push their software into schools, schools want to prepare their students for the software they will use in studios, and studios buy software known by students.

In this situation, we don't want to break the system. We only want to integrate new concepts and software into this system. We think open source software has a place in production pipelines, partially or in a more radical way, depending on the size of the studio and the kind of project.

The news also mentions a three-week intensive project that involves using Krita, Blender, and Natron. Are there any other courses that will be rewritten to use free software instead of proprietary counterparts?

The project we talked about is now finished. It is the work of a group of three students — Bérénice Antoine, Clément Ducarteron, and Gaël Labousse — an experimental short film named "The Desert of Sonora" ("Le Désert Du Sonora"). The same team already won a Blender Suzanne Award with the short animated movie "Jonas" a few months ago.

In both cases, these movies are an artistic experiment, but also a technical one. For "The Desert of Sonora", they chose to work only with open source software, to test the efficiency of a pipeline based entirely on free software.

Gaël Labousse, one of the students, told me that they are currently writing a report about the creation of this project, to expose advantages and difficulties of this kind of pipeline: what was cool, what wasn't, and what could be improved. I think it's a pretty good approach. They don't idealize this open source pipeline, but instead they stress-test it and change things that are wrong with it.

Other groups during these three intensive weeks used Blender and Krita, but only this group decided to work exclusively with free software. Fortunately, the results of their efforts are quite impressive. That's really encouraging.

As far as I can tell, you have tons of experience using and customizing/automating After Effects. You did a video course on motion design with After Effects 5.5 in the past, and you currently work for a company that automates Ae workflows. How and why did you start exploring free software options?

Because my father was a programmer, I started writing code at the age of 7. It was difficult in the early 1980s to code graphical stuff on my old ZX81. But it was a time when everybody experimented. It was the era right before the boring MacOS/Windows battle. New computers came out every month, and none were compatible with the others. It was obviously not really productive, but it was fun to test and discover new alternatives.

I think I still follow this philosophy of always looking for new ways to do things. I'm probably just a paranoid guy who tries to find alternatives all the time, even though I'm happy with the software I already use.

The reason I do that is because sometimes changing one's habits and point of view makes it possible to solve previously unsolved problems. That's why I decided to explore free software after having used proprietary software for many years.

The reason why I use libre software is not because it's gratis. I decided not to use Maya or 3ds Max anymore and mostly switched to Blender, because it is an efficient tool for my job.

So, really, you are just a pragmatic guy?

I'm a very pragmatic guy. If you give me inefficient software to work with, I'll refuse to use it, even if it comes at no cost. I choose the software I use not because of the price, but because it's good for my business. Blender, Krita, Natron, FFmpeg, DJV View, and many more apps can be used in production. I use them every day! The way we can use them in a pipeline depends on the project, of course.

As you said, I work for a company called Ivory on Automate-IT — a solution based on After Effects that automates TV promos and motion graphics for TV channels. Why After Effects? Because right now it is the software used by 95% of TV channels for this purpose, and most of the projects done by artists in this area are created in After Effects.

But because we work a lot with TV channels, we see the share of free software like Blender growing every day in all kinds of production. I think television can adopt free software more quickly than the cinema industry right now, because of the size of the projects and teams. But it's a work in progress, where the problems are usually more psychological than technical.

Do you know that when you sit in a modern digital theater to watch a movie like Transformers, there's an 80% chance that an open source player is running right behind you, inside the dark projection room? Take a look at this page from the leading digital cinema player vendor, and read the really small grey line at the bottom. Your favorite movies are played by FFmpeg!

How much experience with Nuke or other node-based compositing apps did you have prior to using Natron?

For compositing purposes, I previously used Commotion, Combustion, Shake... and After Effects, since its very first release. Today, I mainly use After Effects, Nuke, Fusion, Natron, and sometimes Blender, for really simple cases.

That's not because Blender doesn't have a good compositor. It's really powerful; it's only about efficiency. Time versus money. Deadlines in this industry are shrinking more and more every day. We have to find the best solution for a specific problem or job.

One of the scenes from "Le Désert Du Sonora" in Blender

How often is free software the best choice, in your experience?

Sometimes it's the best option, sometimes it isn't. I try to spend a (huge) part of my time improving free software by giving my feedback to developers and building solutions around free software to create my own alternatives.

From my point of view, the Blender compositor is not the fastest and most efficient way to work, and I prefer other solutions. That's why I'm really happy that Natron is now available! For me, it's the first real alternative for high quality compositing. Of course, it's still at an early stage of development, but I'm quite impressed by all the features that are already available, and that's after just one year of development by two programmers!

Earlier in the interview you mentioned that you use Blender in conjunction with After Effects. Why?

Well, After Effects is really powerful, but it lacks a lot of features in some specific areas. For instance, for particle simulation it has to rely on plugins such as Red Giant / Trapcode Particular. Without this plugin, it is really difficult to quickly create complex and good-looking particle simulations.

But I can use Blender's incredibly useful particle system, render out image sequences, and integrate my particles in After Effects. Thanks to Bartek Skorupa, I use the After Effects exporter addon for Blender every day!

The same goes for integrating 3D objects right inside After Effects. Cinema 4D Lite, which is now bundled with Ae, could be a solution, but it uses a full raytracer, which is most of the time pretty slow compared to a traditional After Effects composition. The Element 3D plugin from Video Copilot seems to be the only good solution for that, even if it has some limitations compared to a complete 3D package like Blender.

And with Blender, I can export my camera and empties to a camera and null objects in After Effects in a flash! It's a really efficient way to work. It saves me a lot of time. Most of the time, when I have to render an animation, 50% of the work is done in Blender, and the other 50% in After Effects, with a lot of tricks to speed up the process.

Because I use Blender, the After Effects limitation regarding the inclusion of 3D objects in a composition is not a problem. I export a Z-depth pass from Blender to After Effects, sometimes a greyscale mask, most of the time fetched directly from the OpenGL renderer to speed up the process even more!

If you, and only you, manage the 2D and 3D parts at the same time, After Effects has, from my point of view, no limitations. The Blender/AE duet helps me work really, really fast.

For people who only manage the After Effects part and don't touch 3D at all, the software obviously needs some extra stuff.

So, what can we do in the open source/free software world to overcome these kinds of problems and limitations?

Well, we can spend a part of our time developing alternative solutions. For instance, a 3D workspace is planned for future releases of Natron. But when can we hope to get a full particle system in Natron? We don't know. Maybe really soon, if we can pay someone to develop it.

Still from "Le Désert Du Sonora"

While doing a few "best commercials made with Blender" curated lists, I spoke to various studios using Blender + After Effects. It seems that availability of custom addons for Ae is one of the major roadblocks to adopting Blender and Natron for compositing, or just Blender for everything. Do you see some way to deal with that? Or do you think it's not really a problem?

It's always the same question: who will contribute to a project like Natron, or any other kind of free software? We can think about several solutions: documentation, tutorials, code and, of course, money. If we want to include free/libre software in pipelines in a more radical way, we have to detach the word "gratis" from the concept of open source software.

If we want to have a real alternative to each software we use today in production, we have to pay for that. We have to consider the work of talented developers who provide a new vision of modern production. The goal is not to break the system in place, but to create an alternative shaped by the artists and for the artists, even if they can't code anything.

We speak with Alex [Gauthier], the main developer of Natron, about the software and its future every day, defining a realistic roadmap. Alex told me a couple of days ago that I use his software better than he does. But he develops so fast that I look like a snail next to him!

So we have to remember: it's team play, where we combine each other's skills in specific domains. Using graphic tools is not only a technical topic; it's all about methodology. With a good methodology and good knowledge of your work, you can usually switch easily from one software to another. I see in free software a real potential to re-establish direct communication between developers and users. But it could take a lot of time, even though I'm confident about the future.

ATI students during the talk by David Revoy

When I take a look at my students, I can see that learning mainstream 3D applications and experimenting with free software at the same time is not a problem. They can handle that, because they are curious enough.

This way, without pushing them at all, open source applications are slowly taking their place in production pipelines. Of course it's gonna start with small projects, because it's easier to create a new pipeline from scratch in that case.

Isn't it the dream of every second free software enthusiast that studios like Weta Digital step up their game and start using free software for content production on their Linux boxes currently running Maya and Houdini? Or that small studios using Blender et al. make it big at the box office?

I usually hear some people saying that they don't understand why big studios can't replace software like Maya in a snap! Obviously, all those people never worked as technical directors in such a big studio. That was my job for about ten years before I switched to freelancing.

When you have to manage a project with more than 200 artists and a lot of complex shots, you have to plan everything, sometimes 2 years before the first artist arrives at the studio. Once the project is launched, you can change some details in the pipeline, but you never break it completely.

Excerpt from a short animated movie "Herakles, Aux Origines De La Crau" by Les Fées Spéciales

We have to prove that we can use free software and create a solid pipeline on top of it. Some new studios try to do it, like the newly born Les Fées Spéciales in France, who decided not only to use open source software exclusively, but also to improve it in a production environment. That's how we can make sure that free software keeps improving through a constant exchange between users and developers. I hope to see more and more studios of this kind in the future.


Inkscape to organize its first community-funded hackfest

The Inkscape team is raising funds to organize a three-day hackfest in Toronto this April, right before the annual Libre Graphics Meeting.

The idea is to get ca. 10-12 Inkscape contributors into a single room and let them plan their further work on Inkscape, do actual programming, and, ultimately, make Inkscape better.

The Inkscape Board has already decided to use $10,000 from the project's donations fund to cover travel/accommodation expenses for both the hackfest and participation in Libre Graphics Meeting, but more money is likely to be required, as not every team member lives in North America.

Why is this fundraiser a big deal?

Despite meeting each other at LGM every other year or so, team members have never had proper quality hacking time together face to face. And while videoconferencing might help, it's not all it's cracked up to be.

So who's coming, and what's in it for the community?

The likely participants so far are:

Martin Owens. He's been getting increasingly involved with the project over the last several years, doing all kinds of work, from adding new handy features and fixing bugs to programming the new website.

Tavmjong Bah. For years Tav had been orbiting the project as the creator of A Guide to Inkscape — the reference for Inkscape users. Eventually he started programming, mostly to improve SVG 1.1 and SVG 2 compatibility, and then he became Inkscape's ambassador in the SVG Working Group, where he makes sure that SVG provides features that are in demand by illustrators. Read his blog post for details on a recent SVG Working Group meeting in Sydney.

Jabiertxo Arraiza Cenoz. He joined the project only a year or so ago, but he's likely to become the next Inkscape superstar thanks to his work on live path effects, most of which you will be able to use when v0.92 is released. If you ever wanted a fillet/chamfer tool in Inkscape, or had the feeling that Spiro curves should be visualized as you draw them, you absolutely want him at the hackfest, because that's what he has already done. Imagine what else he can come up with!

Bryce Harrington. One of the founding members of Inkscape, currently doing mostly boring organizational work that, nevertheless, has to be done to keep the project's gears rotating smoothly.

Joshua Andler. He is a Day 1 Inkscape user. Apart from being another Inkscape Board member, he's been organizing Inkscape booths at SCALE (Southern California Linux Expo) since what feels like the dawn of time.

The agenda of the hackfest is subject to changes, but here are some rough ideas that will be taken into consideration:

  • roadmap planning, how new major releases can be cut faster;
  • early start on redesigning the extensions system;
  • looking at what can be done to improve print-ready output (CMYK, spot colors);
  • various usability improvements.

The actual agenda will become more definite towards the beginning of the hackfest, when the team has a better understanding of who exactly is coming and what these people are interested in working on.

The idea is to make Inkscape hackfests a common way to speed up development. But since getting people from around the world together is not exactly cheap, this is where your support will play a major role.

Sounds like advertising? Go ahead and donate to help organize the first Inkscape hackfest.

Krita 2.9 released with tons of new crowd-sponsored features

The Krita Foundation has released its most important update so far: the majority of the new features were sponsored by the community via a successful crowdfunding campaign in 2014, sales on Steam, and donations.

The highlights of the release are:

  • new Perspective transform tool with highlighting of vanishing points;
  • new Cage and Liquify transform tools;
  • non-destructive transformation via transform masks;
  • various improvements in deep painting;
  • better blending in the Mix brush and assorted improvements in brush engines;
  • new assistants to help drawing parallel and infinite lines, as well as objects in perspective;
  • updated port of G'MIC to provide all the new features including artwork colorization;
  • better support for PSD and EXR files, newly added loading of raw files, newly added support for r16 and r8 heightmaps;
  • the version of Krita for Linux now relies on colord to use unique ICC color profiles per display.

You can learn more about the new features in the detailed release notes.

Apart from bugfixing, immediate future plans include finishing Photoshop-like layer styles, as well as support for this feature in PSD files.

In early March, the team will also start completing the Qt4-to-Qt5 port and begin moving towards KDE Frameworks 5, which will help produce more stable and slimmer builds with fewer dependencies for OSX/Windows users.

There's also a new Kickstarter campaign in the works. According to Boudewijn Rempt, the two big topics of the campaign will be performance optimization and animation support, and stretch goals will again be suggested by the community.

Finally, Dmitry Kazakov isn't the only full-time developer now: Boudewijn is currently sponsored to work on Krita two days a week, and he hopes to be able to spend even more time on the project after the next Kickstarter.

OpenSCAD 2015.03 released with text objects support

Marius Kintel released a major new version of OpenSCAD, a 3D solid modeling application popular with the makers movement and 3D printing communities.

OpenSCAD is different from the usual solid modeling CAD software in that instead of modeling visually you use a simple declarative programming language, hence the "The Programmers' Solid 3D CAD Modeler" slogan. Complex objects are constructed from solid primitives such as cubes, spheres, cylinders, and polyhedra, or extruded from 2D shapes. These days, of course, there is also an OpenSCAD workbench in FreeCAD for more visually inclined people.
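As a minimal sketch of that declarative style (the shape and dimensions here are made up for the example), a plate with a hole through it is just a boolean difference of two primitives:

```
// A 40x40x5 mm plate with a 10 mm diameter hole through the middle.
difference() {
    cube([40, 40, 5], center = true);                  // the plate
    cylinder(h = 10, d = 10, center = true, $fn = 64); // the hole, taller than the plate
}
```

The cylinder is deliberately taller than the plate so the subtraction cuts cleanly through both faces.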

Just to give you an idea, here's a platform for a micro spider hex multirotor, 3D-printed in PLA from a design available in the OpenSCAD file format on Thingiverse.

Micro Spider Hex Multirotor

The first interesting new feature in v2015.03 is support for text, especially since OpenSCAD relies on HarfBuzz, the free/libre OpenType shaping engine. This means that OpenType features like contextual ligatures are supported. A somewhat tired example here is, of course, the much-abused Lobster typeface, which has quite a lot of these ligatures:

Ligatures in OpenSCAD text

The command for that is as simple as:

text("open", size = 4, font = "Lobster Two");

Of course, if you are going to 3D-print that, you need to merge these ligatures. It's unlikely that there will be manual kerning, but you can fix your model the old way, by splitting a word into several blocks and then translating them accordingly, like this:

text("op", size = 4, font = "Lobster Two");
translate([10.9, 0, 0]) {
    text("en", size = 4, font = "Lobster Two");
}

The result is:

Ligatures merged

The other benefit is that complex scripts are supported. A singularly imaginative example here is the word "Devanagari" written in, well, Devanagari:

For information about text attributes such as vertical and horizontal alignment, have a look at the documentation.

Another new OpenSCAD function is offset(), which moves polygon outlines outwards or inwards by a given amount. You can choose between rounded corners (controlling the radius), straight corners, and chamfers.

Offset example
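In code, the three corner treatments can be sketched like this (a minimal sketch; the radius and delta values are arbitrary, chosen just to make the effect visible):

```openscad
translate([0, 0, 0])  offset(r = 2) square(20);                      // rounded corners
translate([30, 0, 0]) offset(delta = 2) square(20);                  // straight corners
translate([60, 0, 0]) offset(delta = 2, chamfer = true) square(20);  // chamfered corners
```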

There are a couple more new functions and some improvements to existing ones, such as using a PNG image as input for a heightmap in the surface() function.

PNG as heightmap in OpenSCAD
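In code that looks roughly as follows; heightmap.png is a hypothetical grayscale image sitting next to the .scad file:

```openscad
// Pixel luminance is mapped to height; invert = true flips dark and light
surface(file = "heightmap.png", center = true, invert = true);
```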

The user interface got a few improvements as well: a new startup dialog to quickly open recent files or examples from a library, a new QScintilla-based code editor with folding support, SVG and AMF export, and more.

There is a less verbose yet more complete list of changes on GitHub.

Builds are available for Mac and Windows, there's a PPA for Ubuntu (not updated yet), and source code is there for everyone (a note to Fedora users: unless you like jumping through hoops before using software, build with Qt4 and the respective version of QScintilla).

Kimiko Ishizaka and MuseScore team release Open Well-Tempered Clavier

It is done: the score and recordings of J.S. Bach's Well-Tempered Clavier, Book 1, as prepared by the MuseScore team and played by the pianist Kimiko Ishizaka, are now released into the public domain.

The Open Well-Tempered Clavier (OpenWTC) project was launched in late 2013 as a successful Kickstarter campaign, with 904 backers who pledged $44,083 of the $30,000 goal. The fundraiser was boosted by 12 promotional concerts that Kimiko played in the USA and Europe throughout November 2013.

OpenWTC follows another successful community-funded project, Open Goldberg, where the same team prepared and released Bach's Goldberg Variations into the public domain — both the score and the recordings.

It all takes time: working on the performance, picking the right instrument, recording in the studio, post-processing, typesetting the score etc. And doing that for all of WTC Book 1 is no minor achievement. The outcome is the 48 Preludes and Fugues in the public domain:

  • audio in MP3 and FLAC (click 'Buy', then insert at least a zero);
  • sheet music in PDF, MusicXML etc.

If you are a MuseScore user, you can also load the score directly into the application.

What is the Well-Tempered Clavier, and why does it matter?

Among classical music connoisseurs, the Well-Tempered Clavier Book 1 (WTC, or "the 48" for short) is widely regarded as one of the most influential works by J.S. Bach. Here is why.

For a long time, instruments were tuned with such intervals between notes that transposition (playing a melody in a key different from the originally intended one) usually produced a melody that was clearly out of tune. Finding the right intervals was an interesting mathematical problem to solve, and it was solved in the 17th century by Andreas Werckmeister.

So while J.S. Bach didn't invent well-tempered tuning, the 48 was his major, if not defining, contribution to making it popular, as the 48 was pretty much The Music Theory Bible for generations of composers.

Today we have the privilege of sneering at the (now) classical tuning and going for microtonality, because we have this solid foundation on which music as we know it is built. For instance, the kind of progression you find in the Prelude in C major (BWV 846) had been cool long before Klaus Schulze et al. turned up in the late 1960s and made it the flesh and bone of electronic music.

Historical value aside, the 48 is simply beautiful and elegantly sophisticated music (with a score laid out in up to four voices, yet played by a single musician). If this is the first time you are listening to WTC, I officially envy you, because you are about to discover something very special.

Since this clearly can't be left at that (clearly!), LGW spoke about both the musical and the tech sides of this adventure to Kimiko Ishizaka-Douglass and Robert Douglass (team Ishizaka), as well as to Thomas Bonte and Olivier Miquel (team MuseScore).

The music

Kimiko, your rendition of Well-Tempered Clavier has some distinctive features like comparatively slower tempo in a number of pieces, and more dynamics variation compared to what Gould, Richter, Pollini, and others did in the past. To me, the overall impression is that your performance of WTC sounds more intimate. How did it come to that? :)

Kimiko: I wrote my liner notes for the CD booklet, but here are some more ideas. And by the way, not all of my tempos are slower! The A-Major Fugue, for example, is the fastest rendition I know.

Liner notes

The most important part of my performance is the organization, through articulation, of the thematic materials, and the emotional meaning.

All of the dynamics and the tempi derive from those goals, which might mean that there are fewer extremely fast or extremely slow tempi, and I don't push the piano into fortissimo thunder that you might expect in a piece by Liszt.

I tried to make the music sound clear, and natural, and to show all of the incredibly brilliant connections that Bach made. I do use the dynamic range of the piano; I'm not trying to imitate a harpsichord.

Furthermore, it was very important to me that the music is contrapuntal, meaning composed of different independent voices.

Later, music evolved to consist of more chords and melodies, but in Bach's music, especially the fugues, there is always a definite and clear sense of a number of independent voices each telling their own story. That means that each voice has its own articulation, phrasing, and dynamics.

You can see some of the thinking that went into this here:

Kimiko, how have these successful community-funded projects, Open Goldberg and Open Well-Tempered Clavier, affected your perception of performing and recording classical music? How much has changed for you in terms of interaction with your audience?

Kimiko: The two most successful aspects of these projects have been the amazing direct connection that has been built with my audience and supporters, and the high level of control over the recording process that I enjoyed.

I've been astounded over the past years by all of the positive feedback and support that I've received from people who really care about what we're doing. Here is an example from an email that I received just today:

The effort of your team is so highly appreciated. You're doing priceless work for mankind. I lack words. JSB would be proud! Thank you for all the tears of joy!

When I read that and the hundreds of similar messages that people have sent to us since 2012, I am very happy, and it gives me strength to continue. It means that no matter what crises may exist in the (classical) music industry, as long as performers and audiences can connect, it will all work out.

This recording of the Well-Tempered Clavier builds on my experience recording the Goldberg Variations, and it is a more perfect representation of what I want to achieve.

The advantage of running these projects the way we do is that we have complete control over all of the elements. We re-evaluated our choice of studio, and after a long search, decided to again go to Teldex.

Kimiko performing

I was able to go to Vienna and play this Bösendorfer 280 before it got shipped to Berlin for the recording.

I was able to invite Anne-Marie Sylvestre to come to Germany as the recording engineer, and we were able to go for a very specific recorded sound that I am very happy with.

All of these choices were mine to make as a result of the support from our fans.

Throughout the project, you worked with a lot of people to whom the idea of community-funded recordings was likely to be novel. I mean PARMA Recordings, Teldex Studio, and others, not to mention the audience at the 12 concerts you did to promote the campaign. Did you find it difficult to get the message through?

Kimiko: The PARMA team has been great. They were attracted to the idea from the beginning (in stark contrast to some other labels who wouldn't touch the project), and they were also the most professional in terms of coming up with a marketing plan, holding an extra round of legal review (just to make sure they understood the ramifications of a public domain recording), and in setting up worldwide distribution through Naxos.

The Teldex team is likewise very comfortable with the model, but for them, their main interest is in creating a magnificent environment in which a pianist can make and record music.

Robert: One thing that was difficult to convey in the campaign phase was the stretch goals we set involving making 50,000+ Braille scores out of the MuseScore.com digital library.

I think that for many people it's hard to imagine how necessary it is to be able to read scores to properly play classical music.

It's likewise hard for a sighted person to imagine what it's like to be blind. It is near incomprehensible, then, to understand the urgency felt by a blind musician searching for classical scores to read, who finds out that basic repertoire like the Mozart sonatas isn't available in Braille, not because it's technically impossible, but because too few sighted people have cared enough over the years to provide it. So that part of the campaign failed.

Are you going to work on bringing BWV 870—893 (Well-Tempered Clavier Book 2) to the audience next? Or would you explore something else?

Kimiko: My next project, which will be an audio-only project, will be to record the Chopin Préludes on a Pleyel piano that Chopin himself actually played.

It's an exciting project, especially when you realize that Chopin dedicated the Préludes to Camille Pleyel, piano maker and owner of Salle Pleyel in Paris, at the time Chopin lived there.

There will be a Kickstarter campaign for this project that launches very very soon!

The tech

A year and a half ago you said: "When we release the score, it will come together with a MuseScore 2.0 beta release". Looks like that worked out. What was the most challenging part of this side of the adventure? Did you have to introduce more changes to MuseScore?

Thomas: When we branched off MuseScore 1.x and set the goals for 2.0 back in 2010, we quickly realised that the development of 2.0 would take a long time. As MuseScore is developed by a distributed team, working from home, we needed a concrete milestone to work for, which would in turn help us to keep up the good spirit.

Thus the idea was born of doing a Kickstarter and typesetting a challenging piano piece. When sharing the idea with Robert and Kimiko, the synergy became clear rather quickly. As Kimiko was studying the Goldberg Variations at the time, Bach was the chosen one to be freed from copyright.

While the first Kickstarter was successfully finished at the beginning of 2013, the development of MuseScore 2.0 was far from over. Major parts of the code still needed to be refactored, partly because the MuseScore codebase needed to be prepared for mobile platforms. And thus by the end of 2013 we sat together again for a follow-up Kickstarter, this time for Bach's Well-Tempered Clavier.

Team MuseScore

So from our side (MuseScore), the Kickstarter projects were very fun and rewarding, but all in all not that challenging. Instead, the real challenge was bringing the development of MuseScore 2.0 to a good ending.

First, many refactoring cycles took place, introducing several file format changes to extend MuseScore's feature set. Secondly, the codebase needed to be prepared for mobile platforms. So there was a real risk that we would dig ourselves into a deep rabbit hole.

MuseScore users liked the 1.x series because it was easy to use, yet powerful. But would 2.0 still live up to this?

Luckily we received continuous reassuring feedback from our contributor community, but it wasn't until outsider testimonies on the beta release started to come in that we knew that we had nailed it. I'm really proud of this achievement, even if it has taken us nearly 5 years. It is a giant leap forward for MuseScore.

Team Ishizaka

Did Olivier Miquel work with Kimiko, when he started using MuseScore 2.0 beta to typeset the score?

Robert: Kimiko was not very involved with this score, mostly due to her extreme focus on preparing for new performing projects. The editorial decisions are solely in Olivier's hands, with feedback from the public reviews.

Olivier: My main purpose was to give pianists and harpsichordists a clear, light, easy-to-read score, because Bach's music often looks difficult enough.

Using MuseScore 2.0 for a good modern edition, I relied on contemporary notation practice. I followed two historical manuscripts, Agricola's (Berlin) and the Dresden one, and compared several edited versions. It is important to know that there is no single definitive, indubitable copy in Bach's hand available.

Many well-known editions in the 19th and 20th centuries contain a lot of arbitrary signs for a piano performance. Sometimes there are important differences between the sources themselves and the main editions. The Bach Gesellschaft Edition indicates no less than 35 pages of alternative possibilities. A compromise had to be made.

How successful was the community's collaboration on proofreading the first version of the OpenWTC score?

Thomas: For the OpenWTC edition, more than 100 annotations came in from 6 reviewers. Not as much as with the review of the Goldberg Variations, but back then we advertised it better among musicologists.

In any case, we believe that the traditional publishing process for paper distribution, where a thorough QA process takes place before the print production, can be challenged in the era of digital distribution.

Just like users can continuously help correct Google Maps, why not have the same solution in place for sheet music, where users make corrections right in an app displaying the score? It will take some time, but we'll get there eventually.

Apparently at some point in the past the Open Goldberg app was removed from iTunes, and it seems to never have been released for Android. What's the story here, and is making apps for each project like Open Goldberg and OpenWTC something you want to do in the future?

Thomas: In parallel to the creation of the OGV edition, we were also trying to port the MuseScore codebase to mobile platforms. In a first phase, this simply meant rendering a MuseScore file on an iPad, as it was the dominant tablet back then. Developing for Android wasn't an option anyway, since the SDK was not ready for our requirements.

We managed to create a prototype app using the Goldbergs just a few weeks before the Kickstarter release date. Even though it was not part of the Kickstarter goals, we thought it would be a great surprise to deliver a dedicated OGV application, free for both backers and non-backers.

We hired a designer, pulled some late-nighters, and managed to get the app submitted to the Apple App Store a week before the release. Unfortunately we forgot to set a flag in the code, and the app was rejected. A real bummer, given that we missed the opportunity to leverage the great press exposure to push Bach all the way up the App Store rankings. A week later the application was released.

We wondered if there was an opportunity to create a dedicated app for the great classical works — the app as a sort of digital replacement for the album. But we figured this would take us away from what we really wanted to achieve, which is lowering the barrier to learning to play your favorite songs.

And as focus is important, we halted further development of dedicated apps for the Kickstarter projects. This also meant that when the OGV stopped working on iOS 8, we had to remove it from the app store. All focus has been going to the MuseScore application since then.

You didn't reach the stretch goal to make a Braille version of OpenWTC available. But in 2014, you did release Open Goldberg in Braille anyway, thanks to RNIB's involvement. Could you tell us a bit about your collaboration with that institute? What's the plan for OpenWTC?

Thomas: In the spring of 2013, we were approached by the Royal National Institute of Blind People (RNIB), based in London. RNIB helps visually impaired and blind musicians obtain sheet music in a form they can read, as a large print version or as a Braille score. To offer this service, RNIB in turn relies on hundreds of volunteers who use very expensive proprietary notation software to transcribe music and create these special editions.

They wondered if MuseScore could help make this process simpler and less expensive. As MuseScore is all about making music more accessible, we didn't think twice about taking on this challenge. Still, quite a lot of development was needed.

The first project revolved around implementing RNIB's own standard, Modified Stave Notation, in MuseScore. It is basically a set of engraving rules for creating large print music that is easier to read for visually impaired musicians.

In a follow-up project, the focus was on making MuseScore itself more accessible by making it work with the NVDA screen reader, so that MuseScore could verbally read the music to you while you navigate through the score.

So today with the Open Well-Tempered Clavier edition freely available for everyone, visually impaired musicians can open it in MuseScore 2.0, apply an MSN style to it, and create their own large print edition. Blind musicians can let MuseScore verbally read the score, or read the Well-Tempered Clavier through their braille terminals.

I'm demonstrating these innovations to the public for the first time at the UKAAF conference on March 20th in London.

Program of Libre Graphics Meeting 2015 conference is up

The program of the annual conference on libre software for creative professionals has been finalized and published. Over 50 developers and users of free software will deliver talks and lead workshops in Toronto on April 29 — May 2.

Although officially the topic is "Beyond the first decade" (the first LGM took place in 2006), this year, the conference has a slight bias towards type design and typography:

  • Peter Sikking (Man+Machine Works) will explain the user interaction design process behind Metapolator.
  • Shankari Priyadarshini Ravichandran (consultant for URW++, Monotype, and Google) will talk about challenges in creating open typefaces for complex scripts such as Tamil.
  • Open Source Publishing team will present their work on stroke fonts...
  • ...and there will be even more fonts and typography related talks.

Chris Murphy (Color Remedies) will summarize the state of color management implementation and then lead both a workshop and a meeting for people interested in the topic.

There's a short web design and programming track too:

  • Amelia Bellamy-Royds (W3C) will introduce key concepts and methods behind accessibility on the web, with a focus on SVG.
  • Carl Chouinard (The Grid) will explain how to use the comparatively more expressive Grid Style Sheets language.
  • Eric Schrijver will give you a quick tour of creating realtime collaborative web applications with Derby.js and Meteor.js.

The topic of fashion design with open tools will be discussed again with Hong Phuc Dang, founder and organizer of FOSSASIA, although, surprisingly enough, she's not planning to cover Valentina, quite possibly the only free pattern-making app with a GUI out there.

The 3D/VFX track is not particularly strong this year, and yet you will have a unique chance of meeting Alexandre Gauthier-Foichat, one of the two principal developers of Natron. Alexandre will demonstrate how to use Blender render layers in Natron to produce stunning visual effects.

Jean-François Fortin Tam will talk about PiTiVi, the free/libre non-linear video editor, project management in FOSS projects, the economics of free software development, and more. Additionally, Canadian filmmaker Ben Sainsbury et al. will introduce you to FilmTIME, a simple open source interface/tool for film directors to express creative vision.

For more details please visit the program's page. The team is asking attendees to register for the conference in advance.

The event is free to visit, however you are encouraged to donate to cover travel expenses of people who will give talks, as well as developers who use LGM as a venue for project meetings.

MuseScore 2.0 brings better music notation, improved usability

The much anticipated major update of MuseScore brings score layout improvements, linked parts, guitar fret diagrams and tablatures, the importing of Guitar Pro files, and more exciting new features.

For a more or less complete list of changes, please read the release notes, or just download it and make your own judgement. There's also a great video from George Hess with highlights of the new release:

A week ago, LGW already interviewed some of the MuseScore developers when the Open Well-Tempered Clavier project was released, but there is just so much to discuss when it comes to the score editor. Thomas Bonte and Nicolas Froment kindly answered all the questions.

After so much rewriting of MuseScore, are you happy with the outcome, or do you still see some flaws in how MuseScore works internally and/or with regard to the UI?

Nicolas: We are very happy with the outcome. The core of MuseScore, a single code base, is running on the 5 major OSes of the moment. But of course, being perfectionists, we do think there is still room for improvements.

If I had to pick a single one, it would be layout speed. Currently MuseScore re-lays out the full score for any edit; we could optimize this.

My impression from our past conversation several years ago was that you didn't intend to push MuseScore to take on Sibelius or Finale. Back then, the app was often compared to Finale Notepad. Where would you position MuseScore today?

Nicolas: In terms of features, we believe MuseScore covers the vast majority of the engraving features of the two software packages you mention, and these features are probably the most used ones. MuseScore also provides some features that they do not (import of several formats, for example). MuseScore's goal remains to provide affordable, easy-to-use music notation software that creates beautiful sheet music.

MuseScore used to be criticized for score output quality. However, things seem to have improved with 2.0. What’s your approach to working on this? Do you mostly rely on input from users or do you run various continuous comparative tests?

Nicolas: The approach is the same as for any other open source software. Something looks wrong or broken, and a developer thinks he can fix it, so he scratches his own itch. It happened repeatedly to Marc Sabatella during the development of MuseScore, and so he worked on handling second intervals in a multivoice context, stacking accidentals, beam groups, etc.

During the last development cycle, Elaine Gould published the “Behind Bars” book which is now recognized as a reference for music notation. The book helped us a lot to make things right.

Werner Schweer reading the Behind Bars book

Of course, music notation is far from an exact science even with a reference book, and sometimes we needed some discussion to find a compromise. When a decision is made and implemented, we use visual tests to make sure there are no regressions.

One notable change in 2.0 is Zerberus — a new built-in SFZ sampler. For the older sampler, you still ship an SF2 soundfont that is better than before, but still somewhat simplistic. Should we expect you to continue improving built-in playback with regard to output quality? Maybe do a Kickstarter to create a really high-quality soundfont with instruments recorded all in one place by the same engineer (that is, unlike the Sonatina SFZ frankensamples)?

Nicolas: MuseScore 2.0 comes with a new soundfont, derived from FluidR3GM, which was the default on Ubuntu, but not in most other distributions and on Mac and Windows. For those used to the old soundfont, the quality will increase dramatically already in 2.0. The SFZ sampler, Zerberus, has been implemented with the Salamander Piano soundfont in mind. For piano scores, it gives a very good result. Check this page for a comparison.

We would love to work on better playback in future versions, but we need to keep in mind that MuseScore is first and foremost about notating music. Any playback improvement shouldn't get in the way of easily creating beautiful sheet music.

We hope to be able to improve the internal synthesizers and to offer an even better connection to JACK for more demanding users. Some developers have shown interest in working on VSTi support, but so far no work has started, and the licensing status of such an implementation, VST being a proprietary standard, is still unclear.

The idea of a Kickstarter to create samples and a soundfont has been raised several times. So far nobody has taken on this challenge to make it a reality.

How much and in what ways do you interact with educational institutions today?

Nicolas: We interact with educators and institutions via the same channels as we use with our users, that is, email and the MuseScore forum. Many schools are using MuseScore, and we mostly learn about this via a student's tweet or a support request in the MuseScore forum.

School network admins reach out to us to get help deploying MuseScore on Windows networks. It's one of the reasons why MuseScore on Windows is delivered as an MSI, to ease this deployment.

Occasionally, we go out and promote MuseScore at music conservatories, and we give workshops. Our last workshop, in February this year, was given at the Royal Academy of Music in London.

Currently MuseScore focuses on contemporary notation system. But wasn't there a project to bring Chinese traditional score support to MuseScore? And then there's the whole community around early music, partially served by Manuel Op de Coul's Scala software. Do you see MuseScore adopting more notation systems and tunings/temperaments in the future? How much of a priority would it be?

Nicolas: A Chinese user started an initiative to implement Jianpu in MuseScore, but as far as I know no code was written.

Regarding early music, there are some features in MuseScore 2.0 like ambitus, cross-measure tied notes, and figured bass. In MuseScore 1.3, there was a plugin to read Scala temperament files and apply them to the score. It's possible since each note in MuseScore can be tuned to 1/100 of a cent. I'm sure this plugin will be ported to 2.0 one day.

We also got several requests to incorporate Gregorian Chant in MuseScore, something that the Gregorio Project tackles specifically.

Craig Fisher is working on creating a generic approach to incorporate several alternative music notation systems (more than 5 staff lines, no accidentals, pitched note head etc.) and would like us to incorporate it in MuseScore in the future.

So sure, the demand is there. But again, MuseScore's goal is to remain easy to use and to have a broad audience, not necessarily to attract users with very specific use cases, especially if they are served by other open source software (LilyPond, Gregorio). So we would add more features only if they don't get in the way of a teacher willing to create a simple exercise.

Adding support for tablatures and importing Guitar Pro files means that MuseScore now partially replaces (or complements — there's more than one way of looking at it) TuxGuitar for those who have been waiting for its updates for over 5 years. Do you expect it would be sensible to keep MuseScore generic for guitar players and rather see the current low-profile development of TuxGuitar flourish, or do you think there's a place in MuseScore for more instrument-specific features?

Nicolas: There is definitely a place for more instrument-specific features, especially for one of the most used instruments in the world. The support for tablature in MuseScore 2.0 is quite extensive, but we know we are lacking some features such as harmonics playback or hold bends. These features would be welcome.

Guitar tabs in MuseScore 2.0

Regarding TuxGuitar, it's a fine piece of software; we used it a lot to debug the Guitar Pro import. I hope someone will resurrect it one day.

During v2.0 development cycle, you adopted Bravura open font and the SMuFL initiative by Steinberg. Have you seen much benefit from this yet, or do you expect to get a better understanding how it affects your user base now that v2.0 stable is out in the wild?

Nicolas: The previous version of MuseScore used a single music font. There was no way to change this font. SMuFL defines a code point for thousands of music symbols, together with best practices for creating fonts, defining metrics, etc. This is incredibly useful.

Once we modified MuseScore to follow SMuFL and integrated Bravura, it was very easy to add support for Gootville, the third font available in MuseScore 2.0. It would be equally easy to add more fonts.

Reunion by Marc Sabatella rendered with Bravura

The missing feature is the ability to use an external SMuFL font. If we add this feature in MuseScore, score files will no longer be portable, since the recipient of the score will need to have the same font. The score font being central to the drawing process, the score would look very different if the font is not present.

We will see if the ability to use more than three fonts provided by MuseScore is a popular feature request.

Earlier you established a plugins system to enhance MuseScore’s feature set by getting 3rd party developers involved. Currently plugins need to be manually downloaded, installed, and updated. Some of the plugins only work on Linux, and, as you mentioned, some of the plugins need porting to work in newer revisions of MuseScore. Do you have a plan to simplify this?

Nicolas: For MuseScore 1.x, over 80 plugins were developed, and indeed they need to be downloaded and installed manually. Sometimes the installation is not trivial and requires admin privileges.

In MuseScore 2.0, we don't yet have an integrated plugin manager, but some work has been done to make installation easier. First, there is a way to install plugins in a user directory, and then we have a system to update languages, developed by a GSoC student, which is flexible enough to handle plugins. So, a connected plugin manager could happen in the future.

MuseScore 1.x had a plugin system based on QtScript. In order to create a UI, plugin developers could use Qt ui files. This system required us to build C++ bindings for all the Qt classes we wanted to use. This is similar to the plugin framework of Amarok 2.x. For MuseScore 1.3, this binding took 60% of the build time, and so it was a drag. Also, the binding generator has not been ported to Qt 5, and QtScript will apparently be deprecated in Qt 5.5. So we needed another solution.

MuseScore 2.0's scripting framework is based on QML, so the UI can be designed with QML as well. The integration with the core is a bit less painful than with QtScript, and so the framework exposes a lot more elements. However, the plugin framework in 2.0 is far from perfect, in particular creating elements is not very developer-friendly, and it would need more contributors.

MuseScore's source code evolves organically, so it's pretty hard to define a stable API for plugins. Without one, it makes less sense to create an easy way to download and update plugins.

Your recent collaboration with the Royal National Institute of Blind People in London resulted in creating tools to make more sheet music accessible to visually impaired and blind musicians via MuseScore, as well as in providing support for the free NVDA screen reader. Last Friday you participated in the UKAAF conference to demo these new features. What feedback did you get?

Thomas: Our collaboration with RNIB started in 2013 when they reached out to us asking if MuseScore could create Modified Stave Notation (MSN) scores. MSN is basically a set of engraving rules which MuseScore users can follow to create sheet music that is easier to read for visually impaired musicians. Enlarging the normal sheet music offered through retail simply does not solve the readability problems. MuseScore 2.0 now has complete MSN coverage, and it’s the only package on the market offering this.

Large print of the score with MuseScore

The collaboration continued in 2014 with improving accessibility in MuseScore itself. As this is a real challenge for graphical software such as MuseScore, we limited the scope to verbally reading scores by integrating with the open source NVDA screen reader. And with great results: the MuseScore menu, most popup windows, the palettes, and the actual score view are now accessible. This means MuseScore can verbally read all the elements in the score for you via NVDA.

The first feedback on the accessibility came in early January this year. First through James Risdon, music officer at RNIB, followed up by external testers. We learned that technically everything was working, but there were two main aspects to improve on:

  1. The score reading is too verbose, as it reads everything printed in the status bar.
  2. The use of shortcuts to navigate through the scores and reach the elements is too confusing.

Both issues are currently being tackled.

As for the real feedback from the attendees of the UKAAF conference, well, it’s still too early. MuseScore 2.0 was released just recently, and it’s with that release that we expected some more external feedback. In any case, needless to say that being able to verbally read the score is a serious milestone, as this was not possible out of the box with any other notation software. So in general MuseScore is leading the way in making scores more accessible.

One notable trend in comments about MuseScore mobile apps is that people expect to be able to use them to compose music, not just read and play back scores. Is this something you intend to implement? Would that require something like iRig Keys support?

Thomas: We did our first baby steps on mobile in the past two years with a player app. We needed to learn how this all works and how the MuseScore core could be ported to mobile platforms. We really felt that tablets are the perfect tool to learn new songs, rehearse, and perform, but less so to compose or notate.

MuseScore Songbook on Android

When we released the MuseScore apps, we anticipated that quite a lot of the MuseScore users would expect it to be an editor. For that reason, we originally named the app “MuseScore Player” to set the expectations.

Also, we made it clear in our announcement that with this app we wanted to focus on music learning and practicing first. After all, there are quite a lot of people using the MuseScore desktop application for that reason only, so not to notate but to learn.

With the many requests, though, we have been tempted to test a few things with score editing on touch devices, but we decided to hold off and first learn more about developing for touch. We nicely replied to each request that one day we’ll get to it, but first things first.

Adding MIDI support to work with iRig Keys, both in and out, is obviously something we could do as well. There is however no turn key solution available in Qt so we would have to build it ourselves.

With musescore.com (seemingly) fully operational, and the MuseScore Songbook app for Android/iOS available for a fee, would you say that the project is financially sustainable at this point? What do you think should or could be improved? Where would you take MuseScore further in that regard?

Thomas: We launched musescore.com by the end of 2011, and the mobile apps for Android/iOS were released in May 2014. Our goal is to make a vertically integrated platform for sheet music, so providing tools and services for producing, distributing, and consuming notated music. That’s all business lingo just to say that we want to help people learn to play their favorite songs on their instrument.

One of the major struggles aspiring musicians face is finding the right piece of sheet music which fits their needs. The established sheet music companies have not been able to fill in this demand because their business model is based on scarcity. This is exactly the problem we’d love to fix with MuseScore. So with that ambition made clear, it’s fair to say we are a long time away from being sustainable.

Symmetry painting mode lands in unstable GIMP


Earlier this week, Jehan Pagès delivered on his promise to create symmetry painting mode for painting tools in GIMP. The code is now available in a dedicated development branch to be later merged for v2.10.

The new mode is supported by all brush-based tools (Paintbrush, Pencil, Eraser, Smudge, Clone etc.), as well as by the Ink tool. Jehan also posted a video demonstration of the current implementation:

Symmetry mode for painting tools is the second community-sponsored development project for GIMP endorsed by the GIMP team (the first being improved interpolation methods). The project wasn't 100% funded until several months ago, so Jehan spent most of the time doing what he had been doing for GIMP before: fixing bugs.

As you can see, the current implementation goes beyond the original proposal: there are multiple kinds of symmetry, and they are all configurable. Moreover, Jehan is considering an implementation of pluggable symmetries, although don't take it for a promise just yet.

Despite hard evidence to the contrary, a lot of people still treat GIMP as a generic image editor rather than something suitable for digital painting. So the new feature could be another reminder that there is more to GIMP than cropping, retouching, and color grading.

There are no known builds of this development branch yet. If you are curious to try the symmetry mode for painting tools, you need to build babl and GEGL from Git master, then clone the Git repository of GIMP, check out the 'multi-stroke' branch, and build it. Or you could wait till the new feature lands in the main development branch of GIMP (apparently, soon) and thus becomes available in nightly builds for Windows and Ubuntu, as well as in builds at partha.com.
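For reference, the build sequence would look roughly like the sketch below. Treat it as an outline only: the repository URLs, installation prefix, and binary name are assumptions on our part, so check the GIMP developer wiki for canonical instructions.

```shell
# Sketch of building the 'multi-stroke' branch; URLs and prefix are assumptions
PREFIX="$HOME/gimp-dev"
export PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH"
export LD_LIBRARY_PATH="$PREFIX/lib:$LD_LIBRARY_PATH"

# babl and GEGL from Git master first
for repo in babl gegl; do
    git clone "https://git.gnome.org/browse/$repo"
    (cd "$repo" && ./autogen.sh --prefix="$PREFIX" && make && make install)
done

# then GIMP itself, on the symmetry painting branch
git clone https://git.gnome.org/browse/gimp
cd gimp
git checkout multi-stroke
./autogen.sh --prefix="$PREFIX" && make && make install
"$PREFIX/bin/gimp-2.9"   # run the development build
```

Building into a private prefix keeps the development version from clashing with the GIMP your distribution ships.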

Additionally, if you've been wondering what the GIMP team has been busy with lately, here's an extensive 2014 report.


Free pattern making software Valentina 0.3 released


The new version of Valentina, free/libre pattern making software for fashion designers, features improved output for cutting and enhancements to design tools.

We already introduced you to this project about a year ago, so for the background of the project you can check this article.

The major new feature in this release is automatic layout of patterns for printing (e.g. on a plotter). Here's a quick demo from Roman Telezhinskyi, project's lead developer:

Most of the other changes are fixes and improvements, such as newly supported folding lines in patterns. However, if you missed the previous release from January 1, it's chock-full of new features: an undo/redo system, support for individual measurements, support for cm, mm, and inches in patterns, new drawing/construction tools and improvements in existing ones, and the list goes on (and on, and on).

A pattern in the making

The project is now well into its second year of existence, and, interestingly enough, for Roman it's very nearly full-time employment, even though a public fundraiser at OpenFunding failed last year. Susan Spencer, community manager of the project, explains:

We didn't yet have a community to support a funding effort. When we're at the point to develop the 3D patterns, then a fundraising effort will be very effective. For now, Roman has some private sponsoring which keeps him occupied.

As for 2D-to-3D workflow, Valentina already supports basic exporting of Wavefront OBJ files. Going further is likely to be a challenging, yet quite rewarding enterprise. Imagine previewing the clothes in MakeHuman on a model that uses body measurements you made with a tape measure, or using clothes simulation in Blender.

Another planned huge change is support for a library of patterns which should make Valentina more suitable for mass production of clothes. This might have to wait till v1.0, when design tools and overall workflow are rock-solid.

Of course, Valentina still has some design issues, and there's also the cultural aspect of body measurements naming. Over to Susan again:

Names of body measurements used in patternmaking are based on idioms and are difficult to remember and select in a picklist. We've worked out a naming convention scheme which makes more sense and lends itself to translation between languages.

We've reviewed many different patternmaking systems to ensure that we have included the measurements required for each system. And we've created graphics to let users look up the measurement, which should also assist with translations. The new measurement system will be available in the next release, v0.4.0.

Some of the other changes you can expect in the next releases of Valentina are printing large images across multiple smaller pages, variable seam allowance widths around a pattern piece, support for internal paths such as contour darts and pockets, etc.

The team provides builds of Valentina for Windows, Linux (Ubuntu, Fedora, SUSE), and OSX. Source code is available under terms of GPLv3+.

FreeCAD 0.15 released, team explores community funding


The new version of FreeCAD features a great variety of improvements in many workbenches, as well as fancy new things like Oculus Rift support.

Model by Michal Gajdoš

What's new

The Arch workbench is probably the most feature-packed one this time:

  • The IFC filter is now based on the most recent IfcOpenShell library; it's faster and supports exporting IFC files, thus making FreeCAD two-way compatible in BIM workflows.
  • Cutting objects with planes is possible now.
  • New Roof tool provides more control over roof features such as thickness, length of the overflow etc.
  • New Panel object allows creating all sorts of panel-like objects.
  • New Arch Equipment object makes it possible to add lighting appliances, sanitary equipment, furniture, and all kinds of interior design objects.

By the way, Yorik van Havre who's behind this workbench, gave a talk about FreeCAD earlier this year at FOSDEM:

The Sketcher finally got more complete support for ellipses, which means that ellipses can be constructed in different ways and used in further operations.

Some of the other new features in Sketcher are tools to diagnose and fix issues in sketches, such as finding conflicting or redundant constraints.

The Spreadsheet workbench was rewritten and is also far more feature-complete now, with improved data retrieval from your models.

For more details please read the release notes for FreeCAD 0.15. Builds are available for Linux, Windows, and OSX.

Additionally, if you are fine with silent tutorial videos, give cad1919's YouTube channel a shot. He's been producing FreeCAD videos hand over fist.

What's next?

Some of the most anticipated new developments are Assembly, CAM, and FEM workbenches.

Assembly seems to have been at the work-in-progress stage since the dawn of time, but lately it has received a lot of attention from developers, and people have been posting screenshots of successful projects for the past several months, like this one from Franklin Vivanco:

Path workbench

The new Path workbench introduces importing, editing, and generating GCode. Editing operations currently include profiling, pocketing, drilling, making compounds and more. The workbench is well-documented on GitHub.

Path workbench

The finite element analysis workbench has been getting a lot of contributions in the past few weeks from Przemo Firszt and Bernd Hahnebach. It mostly contains cleanups and small improvements, but the amount of work is quite impressive.

Paid development

The upcoming Path workbench was the first experiment with community funding of FreeCAD development. The new module was created by Yorik van Havre, Daniel Falck, and Brad Collette. The funding process was rather informal; Yorik is going to blog about it soon.

Next, a month ago, Ian Rees, one of the former contributors to the project, asked the community if they would be willing to financially support his work on FreeCAD while he's between jobs. His last job was maintaining the IceCube South Pole neutrino observatory during the winter, when there's no plane access (talk about badass engineers in the community); his next one is still undecided.

In our conversation, Ian confided that he considers himself a technical generalist. This puts him right in the middle between the customers group (mechanical engineers) and the creators group (programmers). Thus he can both understand requests in users' language and write useful code.

Most recently he added isometric projections, fixed orthographic projections, improved performance during the loading of FreeCAD files and the Drawing module etc.

Isometric Projections in Drawing workbench

Interestingly, Ian doesn't do bounties:

With this experiment, I'm trying pretty hard to stay away from the idea that financial backers support specific tasks. I am saying "for X money, I'll work on FreeCAD for Y time", not "for X money, I'll work on feature Z", and with the blog I think people will get an idea of what I'm doing and whether they want to support it.

When someone sends me a donation, I do ask them for input so I can gauge what others want to get done, but I balance that input with a bigger picture idea of where I'd like to see FreeCAD go. I think there's a lot of less-exciting work that needs to get done on the internals, without adding many real "features", and I don't think the average supporter gets too excited about that.

You can follow Ian's progress by reading his blog. And whether to support his work is ultimately up to you.

That said, much like GIMP's, the core team of FreeCAD (with the notable exception of Yorik) isn't doing paid development yet. However, they do appear to be open to personal fundraisers.

The future of Audacity, interview with the team


It seems these days every other major free/libre media production tool is undergoing dramatic changes that promise a richer feature set, better usability, and, generally, more power to users. Audacity is one of them.

Originally developed by Dominic Mazzoni and Roger Dannenberg, Audacity has been with us for the past 16 years. By now, there's probably a whole generation of people doing things with sound and using Audacity as the go-to application for simple recording, editing, and mixing audio, as well as for completely uncommon projects such as making 3D jewelry out of waveforms.

However, like other high-profile free software, the project appears to be buried under an insane amount of feature requests. Some of them have already been addressed with the two latest releases, which introduced real-time FX preview for LADSPA/VST/AU plugins, support for LV2 plugins, and basic spectral editing.

Modern Spacer Black VST plugin running in Audacity with real time preview

But there are far more requests: contemporary user interface, non-destructive effects and automation, better support for various plugin APIs, complete MIDI workflow etc. So LGW sat down with the team to talk about development priorities and the outlook for the future of the project.

Q: You recently released v2.1.0 with major changes such as real-time preview for effects and spectral selection/editing. Congratulations! Now that it's out, what's the next thing to occupy your time?

James Crook: I expect us to be putting more developer time into quality, but in a smarter way:

  • Tests with each of our recently automated build-on-commits that go beyond pass/fail and monitor performance and our memory/CPU headroom.
  • Low overhead 'countdown' logging so we can log anything we think might help. I intend this to help us track down some glitches that should not happen.
  • Enhancements to scripting to automatically collect/update all the screenshots for the manual.

The screenshot script is for documentation, but of course will be giving Audacity quite a good workout too.

Q: There's still a major gap in crossplatform free/libre software when it comes to an easy-to-use digital audio workstation like Apple's GarageBand. Various existing projects are either inactive (Jokosher), Linux and JACK-only (Qtractor, MusE, Rosegarden, NON-*), EDM-oriented (LMMS), or just commonly considered too complicated for beginners (Ardour). Do you see Audacity filling that void for "bedroom musicians"?

James Crook: Audacity has, in my view, become too hard to use. We need a much simpler mode for it that at the same time does not 'sandbox' you away from the more advanced features. That's a big GUI design challenge rather than just a programming challenge.

I think Julian Dorn and Leon Schlechtriem have some very good thoughts on that with their dedicated recording mode:

Q: MIDI features in Audacity are still basic, and proposed musical time in the timeline hasn't been implemented yet either. Is it about project vision not involving MIDI much, some sort of technical limitations, or the lack of contributors?

James Crook: What gets developed depends on people's interests and time, and MIDI is unfinished indeed. Yes, we are all pulling in slightly different directions. As a group, improving real-time is much higher priority for us than MIDI. But we do want MIDI, for reasons beyond using it for composing.

Both MIDI and RT will benefit from pluggable track types, and that is where there is more activity.

Q: About that activity. What are the most exciting features in the works lately?

James Crook: Last year we did Audacity Unconference in Preston, organized by Martyn Shaw, where we demoed radically transformed user interface, converting hand claps to notes (MIDI and wave), a minimally editable score track (musical notation), the RT preview that is now in 2.1.0, an RT effects dock, and automation curves. Not all demos we make will make it into production, but there is exciting stuff in the works.

Q: Adding real-time effects dock and automation would involve a major rewrite of the audio engine (not to mention redesigning the UI), something like what Joshua Haberman started years ago with the Mezzo project, right?

James Crook: I can only partly agree. The FX dock demo was based on what is now 2.1.0 code, so the 2.1.0 audio path supports it. Leland has put down a lot of the foundations for full real-time by springboarding from cross-platform work by GStreamer.

The automation curves were demoed on new audio code with micro-fades that rejoins Audacity at PortAudio. We are making changes in mainstream Audacity audio path based on experience with it. One of those changes will be in 2.1.1.

For both these demos GUI is currently the real barrier to that feature being ready. There will also be work to get the built in effects real-time, as each one will need to be visited.

Joshua's Mezzo initiative was very focused on the audio engine. We do need a much cleaner API between the audio engine and the GUI — and that is where Mezzo was heading. We also need other structural changes even more. If we don't think these things through carefully and prototype, then we are writing 'the same code' over and over in the GUI in slightly different disguises.

Much of the Audacity specific code that we still have to write for these features is GUI code. The demo code helps us work out what structural changes to make both in GUI and audio API.

Q: But you don't talk about these work-in-progress projects much, do you?

James Crook: It would be very irresponsible to get end-users' hopes up based on these early demos. There is though more happening, more new activity, than you see in the main git repo.

Like the current MIDI code, and like Mezzo, there is no guarantee work in progress will ever make it into released code, or that if it does that it will be any time soon.

Q: Speaking of the user interface, Audacity is both praised and criticised for its UI, its branding etc. The team used to be somewhat wary of radical UI changes. Later you added and then, apparently, removed the ability to make skins (or, rather, color themes) for Audacity. Finally, since last year or so, you've been posting UI and logo proposals from users on your Google+ page and collecting input. Is there a change of heart? Are we going to see redesigned user interface and updated branding?

Steve Daulton: I'm very keen to promote engagement and contributions for Audacity beyond coding. Developing a major project such as Audacity requires many types of skills and contributions, and is not limited to computer programmers (though as a software project, high quality code is obviously important). Writers, graphic artists, musicians, translators, VO artists, accessibility specialists... All may make valuable contributions.

UI proposal by Lucas Romero Di Benedetto

Vaughan Johnson: Additionally, in 2014, we worked with Intel on prototyping a touch version of Audacity. I'm trying to get back to that project, now that we released Audacity 2.1.

Audacity with touch interface, picture courtesy of Intel

Q: Since its inception, Audacity has been developed in a somewhat generic fashion, which is why it got adopted by a great variety of users. It got Nyquist scripting early on to simplify writing new features, and there have been at least half a dozen of friendly forks (mostly by team members like Vaughan) to customize it for various purposes. Would you say that Audacity today is truly modular and extensible, or do you see ways to improve the state of affairs? How?

James Crook: No. Audacity modularity is minimal as yet. We only have the basics. We are making slow progress though. As mentioned before, we are working on pluggable track types so that we have more modularity in the GUI.

I view Nyquist in Audacity as 'a secret weapon' that few people really know about, analogous to having Elisp in Emacs. My impression is that no one is using it to its full potential in Audacity. The more involved work using Nyquist seems to be happening in the standalone version of it. New features like SAL land there first.

Nyquist isn't as integral and central to operation of Audacity as Elisp is to Emacs. As yet, Nyquist in Audacity has knowledge only of the audio and not of the GUI. To extend Nyquist properly we need to tell it about the GUI and to be able to plug new GUI elements in.

Q: One of the annoyances users have with Audacity is its overly long Effect menu, whenever too many plugins are installed and discovered. Years ago, an effects taxonomy was introduced to make it possible to choose effects based on the category they belong to (reverbs, compressors etc.). It was later removed for technical reasons. Today, Audacity still separates internal effects from pluggable ones and breaks external ones into numbered submenus. Do you envision a way forward with this?

James Crook: Yes, we have already done some preliminary design work on that.

Q: Stats at OpenHub give an (admittedly, questionable) impression that the team is getting smaller in terms of code contributions, and there's a huge difference in activity even between TOP5 committers. Would you say you are growing or shrinking as a developers team?

Steve Daulton: Take a large bucket of salt. The stats on OpenHub were frozen for nearly 4 months and the last time I looked the stats were over a month out of date. I don't mean to criticise OpenHub, I think they do a great job overall, I'm just pointing out that such stats are not at all reliable for fine grained analysis.

Vaughan Johnson: Yes, OpenHub looks only at code contributions. E.g. Leland always does a lot of commits, sort of "agile"-style, so he gets a very high commit count. I'm okay with that measure, but I think it's not always representative of actual overall contribution. Line count has also been shown to be a very questionable measure, for many years.

The Audacity team is actually growing; e.g., we just added Paul Licameli and encouraged him to add code by giving him commit privileges. James committed Paul's contributions before we gave Paul commit privileges, so it looks like James contributed those, but they're actually Paul's. James has made his own contributions, too, recently. I'm just saying it's a misperception that the Audacity team is shrinking.

Besides, I and others have been putting in a lot of work that doesn't register on OpenHub — website files/updates, builds, releases etc. — things that OpenHub ignores.

Q: Is there a particular line of work on Audacity that you need help with the most? Something that, once completed, would move the project light years ahead?

James Crook: People should do what they personally care about. That's where they will make the most difference. I love the ways that Audacity is already being used in education. Vi Hart did a lovely video explaining overtones using Audacity.

The maths in audio programming ranges from straightforward (amplify is just multiplication) to the diabolically subtle. The hard maths is the biggest most difficult barrier to more developers writing audio code. It's worth tackling head on.
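To make the "straightforward" end of that range concrete: amplification really is just multiplying every sample by a constant factor. A toy sketch in plain Python (not Audacity code):

```python
def amplify(samples, gain_db):
    """Scale audio samples by a gain given in decibels."""
    gain = 10 ** (gain_db / 20)  # convert dB to a linear amplitude factor
    return [s * gain for s in samples]

# A +6 dB gain roughly doubles each sample's amplitude
print(amplify([0.1, -0.25, 0.5], 6.0))
```

The subtlety James alludes to starts where multiplication is no longer enough: resampling, filtering, and FFT-based effects are where the hard maths lives.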

This is the right time to build the FLOSS audio developer community and bring more people in. Done right the hard maths can be understandable and satisfying. Likewise the programming that follows from it.

So I am repurposing convoluted content from Wikipedia and mining existing code, working with others to comb out the tangled explanations, trying to make a really beautiful and wide new on-ramp for audio programming from the very earliest stages on.

I'd love more help. There are challenges of all kinds in it. And it's not just about putting Audacity light years ahead.

Q: What do you see as the most challenging tasks for the project in the foreseeable future — feature-wise, organization-wise etc.?

Steve Daulton: Difficult to put a finger on any one thing as there is so much going on, and different areas require different priorities.

For the documentation crew the major challenge is to continue to provide high quality documentation for a project that is progressing at a rate of knots.

For the user support team, it is to provide high quality support for an ever increasing user base. It is this continuing "challenge" that drew me to Audacity (and no doubt the same for other contributors): we don't choose to do things because they are easy, but because they pose a challenge, and there is personal satisfaction when we are able to rise to it.

James Crook: I think keeping the project fun is the number one challenge for us. We are all volunteers. As the code gets bigger, it is harder for an individual to have a big, visible impact. That could tend to make it less fun.

A bigger mature project can make development, particularly the "fixing other people's bugs" more like work than a hobby. We are doing pretty well at fun and impact. AU14 was fun. Both Leland's and Paul's changes in 2.1.0 have big visible impact.

We're working on ways to make the code smaller, less work to bug fix, and related things to keep the project fun.

Ardour 4 brings sleeker UI, new editing tools, native CoreAudio/ASIO support


Paul Davis et al. released a much anticipated major update of Ardour, the free digital audio workstation for Linux, Mac, and, for the first time, Windows.

What’s new

Updated user interface

With this release, Ardour begins moving away from GTK+ to embrace Cairo, a state-of-the-art library for drawing 2D graphics on display. The benefit of using Cairo is that it greatly simplifies making sleek, appealing interfaces.

SCREENSHOT

For now, Ardour still uses GTK+ for laying out user interface elements, for the file browser, and a few other things. However, all of the editor’s window, all the faders, meters, buttons, dropdown lists — pretty much all widgets are Cairo-based now.

Additionally, the color scheme for tracks and regions was rewritten, so now, as you can see, you get much cleaner background colors for waveforms by default.

SCREENSHOT

Note that this is only the beginning; a lot more “sights for sore eyes” will come in due time.

JACK, CoreAudio, ASIO

At some point in the past Paul Davis changed his opinion about the architecture of professional audio on Linux, and how JACK should expose itself to users. In 2008, he wrote this in a reply to a user at Gearslutz:

There will be a version of Ardour in the future that merges JACK into Ardour itself, so that there is no reason to think about running a separate program at all, and we will likely attempt to even hide the possibility for inter-application audio routing from this version. I think its a shame to limit one’s possibilities in this way, but given that Ardour is a user-driven effort, I imagine it will happen anyway.

In a way, it just has.

First of all, Ardour 4 no longer depends on JACK on every platform it runs on.

Are you a Windows user? You can choose between ASIO and the Windows port of JACK. Are you on Mac? CoreAudio or JACK for OSX — make your choice. Are you a Linux user? ALSA or JACK. Seriously.

For all these options you retain at least basic hardware connectivity for both audio and MIDI I/O.

SCREENSHOT

The Audio/MIDI Setup dialog has been enhanced accordingly. E.g. you can now calibrate and set per-port MIDI latency.

MIDI editing

Improvements in working on MIDI tracks and regions have been scattered all over the user interface. But what it actually means is that you are getting an overall better experience working with MIDI in Ardour.

One such example is that MIDI signal now flows over the entire chain of processors. Robin Gareus added this tweak to circumvent limitations of the linear signal flow in Ardour’s plugin chain and make it possible to feed a plugin’s MIDI output back to a controller.

SCREENSHOT

That way, you can pick a different preset for e.g. setBfree organ simulator, and your MIDI controller will be sent a CC message to adjust the positions of motorized controls accordingly.

There’s also a working solution for merging MIDI regions via bouncing MIDI. Ardour doesn’t do simple joining of MIDI regions, because this cannot be reliably done in a non-destructive manner, but you can bounce and use the resulting cumulative MIDI region.

Removing gaps between adjacent notes is now possible thanks to Legatize command available in the right-click menu when at least two notes are selected.

SCREENSHOT

A new Transform dialog provides ways to make time-based transformations (e.g. velocity crescendos) of note properties such as velocity, length etc. It’s quite a versatile tool.

SCREENSHOT
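To give an idea of what such a transformation does, here is a simplified model of a velocity crescendo: each note’s velocity is interpolated linearly over the time span of the selection. This is illustrative Python, not Ardour’s actual code or API.

```python
def velocity_crescendo(notes, start_vel=32, end_vel=112):
    """Ramp note velocities linearly across a selection.

    notes: list of (time, velocity) pairs; returns new pairs.
    """
    times = [t for t, _ in notes]
    t0, t1 = min(times), max(times)
    span = (t1 - t0) or 1  # avoid division by zero for a single note
    return [(t, round(start_vel + (end_vel - start_vel) * (t - t0) / span))
            for t, _ in notes]

# Four equally loud notes get velocities ramping from 32 up to 112
print(velocity_crescendo([(0, 64), (1, 64), (2, 64), (4, 64)]))
# → [(0, 32), (1, 52), (2, 72), (4, 112)]
```

The dialog generalizes the same idea to other note properties, such as length.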

Fixing the way automation for the sustain pedal is handled by Ardour eventually led to a general improvement in the MIDI automation department. Whether you do any serious virtual piano playing or just use MIDI a lot, this will be extremely helpful. A lot of the work here was done by David Robillard.

General editing and control improvements

One of the nice features backported from the upcoming Mixbus 3 (a commercial fork of Ardour) is the ripple edit mode. In this mode, everything you do with a region or a selection affects whatever data is to the right of that region/selection.

Say, if you delete a section, everything to the right will shift to the left by exactly how much you deleted. If you push a region forward or backward, everything to the right of it will move accordingly.

SCREENSHOT
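As a mental model, ripple mode treats everything later on the timeline as following from edits made earlier. A toy model of a ripple delete (illustrative Python, not Ardour internals):

```python
def ripple_delete(regions, cut_start, cut_len):
    """Model ripple-deleting a span of timeline on one track.

    regions: list of (start, length) pairs.
    Every region starting at or after cut_start shifts left by cut_len.
    """
    return [(start - cut_len, length) if start >= cut_start
            else (start, length)
            for start, length in regions]

# Deleting 2 seconds at t=4 pulls the later regions left by 2
print(ripple_delete([(0, 3), (4, 2), (8, 1)], 4, 2))
# → [(0, 3), (2, 2), (6, 1)]
```

The same shift applies when you drag a region: everything to its right follows along.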

Another handy little feature is sequencing regions. Instead of manually aligning multiple regions in a track so that they had zero gap between them, use Region -> Position -> Sequence Regions.

SCREENSHOT

Pressing Shift while dragging limits the direction in which a region moves. This is helpful when you have a lot of tracks in the current view and you don’t want to accidentally drag a region onto an adjacent track.

Yet another cool little feature is session locking, which came from Tracks Live. If there’s the slightest chance that someone could inadvertently knock over your workstation’s keyboard or tap-dance on your laptop and mess up an ongoing recording, use the File -> Lock command. This will lock all access to the window except the Unlock button. Suck it up, cats!

SCREENSHOT

There are also several improvements in transport and control department:

  • You can configure Loop to become a playback mode rather than a separate command.
  • You can now tap tempo. Right-click on the timeline ruler to create a new tempo marker, then, in the newly opened dialog, repeatedly click the “Tap tempo” button to set the new value.
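
The math behind tap tempo is simple: average the intervals between consecutive taps and divide 60 seconds by that average. A quick illustrative sketch in Python (not Ardour’s code):

```python
def tapped_bpm(tap_times):
    """Estimate tempo in BPM from a series of tap timestamps (seconds)."""
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    avg = sum(intervals) / len(intervals)
    return 60.0 / avg

# Taps exactly half a second apart give 120 BPM:
print(tapped_bpm([0.0, 0.5, 1.0, 1.5]))  # → 120.0
```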

Finally, support for the QCon controller and the original Mackie Control device was added.

Reliability

As much as users adore useful new features, one can’t deny the feeling of deep relief when your “work horse” application reduces its memory usage, especially when we are talking about an 80% reduction at startup. Ardour 4 does just that.

The new version also correctly reads the number of open files it can realistically handle and warns you when you are trying to work on a large session that your computer is likely to choke on.

For a more complete list of new features and improvements, please refer to the official release notes.

Financials and sustainability

Paul Davis has been relying on subscriptions for supporting Ardour since early 2007. The initiative was looking promising: in February 2007, 211 people donated $8000 to support the project, which gives an average of $38 per donation.

Since the beginning of 2014, donations to keep the project afloat have been on a steady decline. This is partly due to the PayPal fiasco, but if you look at the current stats, you’ll see that (at the moment of publishing this) 559 people have donated $2135 for the next month of development, and while there are still nearly two weeks ahead, this gives us an average of a mere $3.80 per donation.

Simply put, while the user base has expanded, the average donation has dropped to a tenth of its original level over the past 8 years.

Between 2001 and 2015, the project was sponsored through consulting and development contracts by SAE Institute, Solid State Logic, Harrison, and Waves. For now, the “underfunding” is more or less compensated by the consulting work that Paul has been doing for Waves, who used Ardour as the foundation for Tracks Live, a proprietary DAW of their own.

A new major Ardour release, as well as cautiously providing Windows builds, may or may not fix the situation, but it looks like the project will need a new approach to stabilize cash flow and thus keep Paul focused on Ardour rather than devoting his time to friendly forks.

The status of the Windows port

The origins of Ardour for Windows go back to a Google Summer of Code 2006 project by Tim Mayberry. His work was later picked up by John Emmas of Harrison to make Mixbus available on Windows. At that point, Ardour/Mixbus still relied on JACK for hardware/software connectivity.

Two years ago another commercial entity, none other than Waves Ltd., approached Paul Davis about creating a new DAW on top of Ardour’s source code. Grygorii Zharun and Valeriy Khaminsky wrote a lot of code to get Ardour to use ASIO via PortAudio, and a lot of help came from both Tim and John, with assistance from Robin Gareus and Paul Davis.

SCREENSHOT

While Ardour reportedly works with ASIO (Tracks Live is shipped by Waves, after all), Paul Davis isn’t ready to provide full support to users of this operating system without being backed by several volunteers willing to answer platform-specific questions from a whole new user group.

Hence, initially, the Windows port will only be available as a nightly build, with a new limitation for unsubscribed users: all outputs are muted after 10 minutes of use. As soon as the infrastructure and human resources are in place to fully support Windows users, the Windows port will become official.

If you are interested in helping Paul, please read this page on Ardour's website and take action.

What’s next

Shortly before releasing v4.0, the team started making various improvements, some of which have been in the roadmap for a while, e.g. the Save As feature. Robin Gareus explains:

The general idea is to do more small/short-lived ‘feature branches’ in the future and keep git master branch clean(er).

To which Paul adds:

The three key areas of further work are likely to be media management and extending mixer capabilities, as well as general improvements throughout the application.

By now, the number of small changes is reaching the point where a v4.1 release may be imminent, although, of course, there is no rush.

Downloads

All current downloading options are listed on the project’s website.

Synfig Studio 1.0 released


Patience is one tough lady: after 13 years in development, the free 2D vector-based animation application Synfig Studio finally gets the golden badge of v1.0, delivering a slew of improvements and new features.

What's new

The new version has a handful of major changes and improvements. Many of them were crowd-sponsored by Synfig's community in 2013—2014. Here are just some of them:

  • single-window UI based on GTK+3, with various improvements;
  • full-featured bone system;
  • new Skeleton Distortion layer to apply advanced image distortion;
  • new non-destructive Cutout tool for cutting bitmap images;
  • initial implementation of a sound layer with support for WAV, Ogg Vorbis, and MP3 files;
  • dynamics converter which adds basic rigid body physics: torque, friction, spring, inertia etc.

For a more complete list of changes, please read the release notes. Here's e.g. a somewhat lengthy demonstration of the bones system developed by Carlos López González and Ivan Mahonin:

Why did it take them so long to release 1.0, then? Well, among many things, there's historical heritage involved.

Going back in time

The project started in 2002 as an experiment by Robert Quattlebaum in creating an animation tool that would automate tweening. In 2004, Robert launched Voria Studios, an animation studio that used Synfig as its main in-house production tool.

Even though Voria didn't last long (it closed later that same year), its first short animated movie, Prologue, was so good that it not only got positive feedback at both AnimeExpo 2004 and ComicCon 2004 — its scenes were used to demo Synfig for years to follow, and Synfig itself inherited Voria's logo.

In an interview with OSnews, published in early 2006, Robert stated:

I decided to open source Synfig because I had reached a point where I realized that there was no realistic chance of being able to successfully put Synfig on the market. Ultimately I'd rather everyone be able to use Synfig than no one, so I decided to go ahead and release it to the world. It was actually always my intention to open source Synfig if my business failed.

Indeed, after the studio closed, Robert had different options, and one of them was getting some big company interested in Synfig. He spoke to Apple, and they refused, so during the summer of 2005 he released the code under the GPL. Interestingly, two years later Apple hired Robert to work on QuickTime/CoreVideo, then GameKit, then Display Systems.

From the very beginning, Robert chose a rather modest release numbering scheme for the project. The first release of Synfig as a free/libre software was v0.61.00. For the next 9 years, this scheme was maintained, even after Robert left the project in early 2007.

Now Synfig is finally where it should have been years ago: at v1.0.

What's next

According to Konstantin Dmitriev, the next big step for the project is improving rendering performance, which is why the team is now refocusing on OpenGL support.

If you are really curious, there are two Git repositories to follow: Exhibit A is a branch of Synfig where main development work on OpenGL is being done, Exhibit B is a code repository where Ivan Mahonin is doing various related tests. Note, however, that for end-users it's just too early to go knee-deep into unstable code right now.

Synfig 1.0 with Sita character opened

There will be further user interface improvements, but until the whole OpenGL effort is done and out of the way, they are likely to be minor. Mostly it depends on how busy Yu Chen (the main UI code contributor) will be.

Additionally, no crowdfunding campaigns are currently planned. Konstantin explains:

We used to "sell" priorities before, but we can't do it again yet, we only work on OpenGL right now. We could try to raise funds with this particular feature in mind, but that only makes sense once we have a working prototype. It would be risky though: since hardware acceleration on multiple platforms is involved, we expect all sorts of pitfalls, especially on Mac.

Nevertheless, you can still support the project financially by either donating or paying for a training package which will soon be updated to match the changes in v1.0.

It has to be said that while the monthly crowdfunding campaigns were ongoing, the team used to be an inch away from failing nearly every time.

Admittedly, the community of animators on Linux isn't large enough to safely secure sponsorship of the project, and Windows/Mac users have other affordable options, which might explain why commercial-grade showcases of Synfig are a rare beast. Hopefully this is going to change now that v1.0 has been released.

You can download Synfig Studio 1.0 for Linux, Windows, and Mac.

Krita launches second Kickstarter campaign to fund development


Creating a state-of-the-art digital painting application is neither easy nor cheap. After last year's success, Krita is launching their second Kickstarter campaign, this time — to fund painting performance improvements and complete the work on animation support.

We asked Boudewijn Rempt, the project leader, a few rather technical questions about the planned work.

The first of your grand plans for this campaign is taking on Photoshop in terms of painting speed. What exactly are you going to do?

We knew all along that Photoshop works interactively at reduced sizes — basically on a mipmap of the original image content, not just the display, and probably also compresses the image data it isn't using directly.

So during last year's Kickstarter, Dmitry started looking into how that would function — you have to juggle virtual memory, the mipmap, the compression... And we got far enough for a proof of concept.

It needs to be done right inside Krita's core tile engine, which is coincidentally also where we need to make changes to support frames in paint devices (for animation).
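
For readers unfamiliar with the term: a mipmap is a pyramid of progressively half-sized copies of the image, so the application can work interactively on a small level and propagate changes to the full-resolution data later. A toy pyramid builder in Python, assuming a grayscale image with even, power-of-two dimensions (Krita's real tile engine is, of course, far more involved):

```python
def mip_level(image):
    """Downsample a grayscale image by 2x, averaging each 2x2 block.

    image: list of rows of integer pixel values; width and height
    are assumed even.
    """
    h, w = len(image), len(image[0])
    return [
        [
            (image[y][x] + image[y][x + 1] +
             image[y + 1][x] + image[y + 1][x + 1]) // 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

def build_mipmaps(image):
    """Build the full pyramid, from the original down to a 1x1 level."""
    levels = [image]
    while len(levels[-1]) > 1:
        levels.append(mip_level(levels[-1]))
    return levels

# A 4x4 checkerboard of black and white 2x2 blocks:
image = [[0, 0, 255, 255],
         [0, 0, 255, 255],
         [255, 255, 0, 0],
         [255, 255, 0, 0]]
pyramid = build_mipmaps(image)
print(pyramid[1])  # → [[0, 255], [255, 0]]
print(pyramid[2])  # → [[127]]
```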

With a "let's make it faster than Photoshop!" slogan wouldn't you end up in a situation where people would request running sensible comparative tests between two apps when you're done?

That's not unlikely. The main focus is painting: what we did last year was draw a diagonal across the canvas and visually check lag. If you set the cache levels in Photoshop CS2 to 1, then it behaves much like Krita.

Isn't there a way to measure that lag programmatically?

I haven't looked into that. We do have some benchmarks that measure painting speed, though.

Do you expect to use these benchmarks much, or will you rely on perceptual tests?

It's up to Dmitry, but for comparison, mostly perceptual tests, I think. After all, we can't instrument Photoshop. The most important thing is that painters should feel it's just as fast and smooth, so they can switch to free software, to Krita, without giving up productivity.

On the campaign's page you mention upcoming support for "big brushes with a diameter over 3000 pixels". Is it about meeting the market's rising demand for content that looks good on 4K displays and beyond?

Yes, we see that people are making bigger and bigger images, using bigger and bigger brushes. I'm not totally sure why. Sometimes people work at resolutions that seem unnecessarily high, especially for web comics. But on the other hand, if they want to go to print, it might be necessary.

You have a student who's already working on animation within Google Summer of Code 2015 program, and now you propose another paid project for that feature. How will those converge?

Basically, doing animation right needs more than one student: we expect we'll need at least 3 months full-time work from either me or Dmitry as well.

So at least two people overall?

Yes, it's a big job, and Jouni Pentikäinen is doing really well, but it's too much for just one guy. We really want to avoid another proof-of-concept that's almost ready to be merged, but not quite.

How many times have you tried adding animation to Krita by now?

This is our fourth attempt at animation, and we want to do it right, with all that we've learned before.

Regarding implementation details: is some sort of basic tweening planned, as much as is possible for bitmaps?

What we want to do is put (nearly) all properties of layers and masks on a timeline with a curve. Which means that a transform mask will transform in between keyframes, and that will transform the associated raster data. Same with filter masks and layers.

So there will be keyframes?

Yes. In fact, they are already there: we've got a working prototype, though using it is an exercise in working around bugs, of course.

But you don't have the image processing core exposing properties to the animation engine yet?

No, that's part of the plan. Right now, only swapping pixel data is 'done'. We always used to have animation as a plugin, away from Krita's core; now it's going to be deep in the tile manager.

What do you expect will be the most difficult part?

Memory consumption, which is why we've got the performance target as well — Dmitry and Jouni have spent a big part of the sprint weekend two weeks ago doing a really careful internal design.

Do you mean consumption bump would happen because of calculating and executing all transitions?

Not just that, consumption all in all. People will want to do a 30 second clip with 10 layers at 1920x1200 or even higher. That's never going to fit in memory if we're naive about it.
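
A quick back-of-the-envelope calculation shows why the naive approach is hopeless. Assuming 24 fps and 4 bytes per RGBA pixel (both our own assumptions, not figures from the Krita team):

```python
def naive_session_bytes(seconds, fps, layers, width, height, bytes_per_pixel=4):
    """Memory needed to keep every frame of every layer uncompressed in RAM."""
    return seconds * fps * layers * width * height * bytes_per_pixel

# Boudewijn's example: a 30-second clip, 10 layers, 1920x1200:
total = naive_session_bytes(30, 24, 10, 1920, 1200)
print(round(total / 2**30, 1), "GiB")  # → 61.8 GiB
```

Roughly 62 GiB for a modest clip — hence the emphasis on caching, compression, and careful internal design.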

Will you have to rewrite the tile manager much?

Parts of it, part of the rendering engine. Dmitry wants to build on his level-of-details proof-of-concept he did last year.

Will there be some smart caching/baking involved?

Both, I suspect. Caching of layer data, baking of rendered frames.

Meanwhile Blender is going further with their own painting feature set, if you saw the recent video...

Yes, the world is big enough for Blender and Krita :-)

When we're done, potentially every Krita .kra file will be able to contain animation, so one thing I'm looking forward to is people like David Revoy using that to make Pepper and Carrot even cooler.


You can support Krita on Kickstarter. If you aren't familiar with the software yet, grab your download at krita.org, it's free forever.

ZeMarmot, open animated movie to be made with GIMP, Blender, Ardour


Jehan Pagès, Aryeom Han et al. launched a campaign to crowdfund an animated road movie made entirely with free software.

"ZeMarmot" will tell you a story of a cute marmot who wakes up one day to break out of his comfort zone and explore the world. There will be new friendships, exotic islands, and even flying carpets.

The team will use only free software throughout production, including for the soundtrack: GIMP, Krita, Blender, Ardour, etc. All project data will be released under the terms of CC BY-SA, so that you can freely study the animation process and share what you learn. The team has already released the project data for the teaser.

There's also some development work expected: improvements to GIMP's animation features, extending OpenRaster to support animation, and improving the Blender VSE.

The immediate goal is to collect 9,000 euros, but will that suffice? We asked this and a few other questions of Jehan Pagès.

Jehan, it's hard not to notice a certain resemblance: the marmot will travel eastwards from the Alps, while you yourself have experience of motorcycling eastwards across all of Eurasia. This can't be a coincidence! :)

And yet it's still fiction. The marmot is not taking the same route as I did: I went much further north (Kazakhstan, Russia, Mongolia…).

ZeMarmot

My current scenario takes the marmot to some places I've never been — Iran, for instance. Of course the script may evolve, but the point is that it is not an autobiography anyway. I'm trying to have more funny and exciting experiences for the movie. Not that my adventures were not exciting, but differently so. :-)

You mention Krita as one of the tools to be used, and the Krita team have some grand plans for animation. But you will be hacking on a subset of animation features in GIMP rather than joining Krita. Why?

Krita is not used right now (that is, it was not actually used at all in the teaser). We did mention Krita, but mostly to show that we are not closed to other software, since a lot of people will just cite Krita when we talk about painting with FLOSS.

I know that's not a popular view, since some people continue to spread the word that GIMP would only be good for photography editing, but that's just not true.

Yes, I expect you'd get a lot of questions about that.

We were actually asked during our LGM talk whether Aryeom did some of the drawing physically first, then scanned and edited it in GIMP, since the imagery has a very nice and warm "hand-painted" feeling. But her answer was unequivocal: nope, 100% GIMP, from start to finish! These are 100% digital drawings, and I think we could show them to anyone saying GIMP can't be used for serious painting.

But you aren't exactly married to using single software for digital painting?

We are open-minded, and if we happen to work with other animators who will want to use Krita or MyPaint, that won't be a problem at all. And, of course, there is also a possibility that Aryeom would switch to Krita or MyPaint, if ever she felt the need or if she suddenly decided that they were much better than GIMP, or whatever.

Being a GIMP dev does not block the situation. These applications are still free software and good at what they do. And if needed, I would gladly contribute code to Krita. I am very software and technology-agnostic. I don't care about Qt or GTK+, or whatever.

You have already made some improvements in the GIMP's animation plugin, haven't you?

I had already been working on it, on and off, for a while now. Some of the changes have been in the unstable branch since 2013; most are in a private branch on my computer, because I don't like to show unfinished code. So it's not like I just started a new project — I was working on it long before. Aryeom has tested various versions of my modified plugins.

As for animation in Krita, I have never been good at waiting for others to do things, when I can do them myself. If I were to wait, who knows when it would be ready. We all know an announcement is not the same as a release. And finally: I like getting my hands dirty. ;-)

Regarding improving animation in GIMP: do you expect to pull anything from GAP at all?

I will have to have a closer look, but I don't really think GAP has much to offer us. The core feature of what I want to do is not extraordinarily complicated anyway. The main job will be to get the right UI, and for this GAP won't help at all (I'm sure it is good for various use cases, but not for cell animation).

Your patch for Blender from about two years ago hasn't been accepted yet, what's up with that?

Well, Ton Roosendaal told me by email that they discussed the topic during one of their developer meetings. Basically, people seemed to like the feature on the bug report; that's not the problem. The problem is that they had no maintainer for the Blender VSE, so they would not approve (nor reject) any new features without a maintainer. This applies to features only; bug fixes are accepted — I submitted two of them, and both have been applied.

Ton even told me at the time that I could propose to become the new Blender VSE maintainer. Unfortunately, I really don't have the time right now to go that far into my Blender involvement.

I have other features I want to work on in the Blender VSE anyway, so I will keep pushing things. If I can, I will implement them as plugins for now, but some features I want to work on can definitely only go into the core, I think.

The script so far accounts for ca. 40 minutes, but you are aiming for a mere 9,000 euros. For anyone who follows crowdfunding at all, that would ring a not-going-to-deliver bell. Do you expect to cut the script and maybe make a ca. 10-minute animated movie? Or do you expect extra funding from elsewhere?

Basically yes, we would just do a very short animation instead. Maybe a small episode which calls for more. We will also adapt the animation quality with funding. Rather than aiming very high and do nothing if we fail, we chose to be flexible.

Now, that's not ideal, and if we get 9,000, that would not really be a success for the operation, in my opinion. Seriously, it barely reimburses what I have already spent from personal funds over the last few months to prepare for all this. But hey, that's life!

We wondered about this for a long time (even until the very last second when we clicked the "Submit" button on Indiegogo), and pondered pros and cons. But yeah — that's what we ended up doing.

A lot of people have also grown wary of flexible funding on Indiegogo, because it's an easy way to run off with the money. Do you think the choice you made would affect you, because even people in the GIMP community probably haven't heard about you as much as about e.g. Mitch Natterer?

Both Aryeom and I have been working with Free Software and Libre Art for some time now. If the funding fails, we won't disappear, I will still work on GIMP and other Free Software, as I've done for the last 3+ years. I will likely implement anyway what we were planning for this project, except that it will probably take years, on and off, on free time, instead of a few months.

Quick demo of symmetric painting implemented by Jehan, expected in GIMP 2.10

Aryeom will still contribute designs to Free Software (she started to do so for GIMP with an icon, and probably soon others), and do Libre Art, like our Wilber & Co. comic strip (notably in GIMP Magazine).

But in any case, we would not be able to work on the movie itself. That takes a lot of time and is a lot of work. It just needs some full-time dedication.


The ZeMarmot campaign is using flexible funding, which means the team will get the money even if they don't reach the requested 9,000 euros. You can support them on Indiegogo.


Anatomy of SourceForge/GIMP controversy


SourceForge, once the most popular and respected hosting for free/libre projects, is taking another self-inflicted reputation hit. The recent controversy involving GIMP is all about ethics, while on the SourceForge's side it appears to be about money.

If you follow the tech industry at all, you couldn't have missed the slew of reports yesterday that SourceForge took control of the abandoned gimp-win account, which GIMP installers for Windows used to be distributed from, and started providing its own offer-enabled installers instead. Ars Technica did a nice job covering that, but there is oh so much more to the story.

Offer screen

Screenshot of the installer, courtesy of Ars Technica

Obligatory disclaimer: being affiliated with the GIMP team, I'm naturally under suspicion of being biased, so if you find any of the claims below subpar to expected journalism standards, by all means, do use the comments section to point out mistakes.

How this became even possible

A fair question one might ask is how builds of GIMP for Windows ended up on SourceForge in the first place.

Historically, the GIMP team has been somewhat relaxed about how 3rd-party efforts were organized. E.g. the official user manual is still a semi-separate project, with its own Git repository, its own team, and its own release schedule. Similarly, both the Windows and OS X builds used to be 3rd-party contributions, both hosted at SourceForge — one built by Jernej Simončič, the other by Simone Karin Lehmann.

Jernej recalls:

I started building the installers for GIMP in 2002, and I initially hosted them on the space provided by my then-ISP, Arnes. I moved away from them a few years later, and while I could probably have arranged with them to keep hosting the installers, I already had a SourceForge account, so using that seemed simpler. For a long time SF was the place for hosting binaries for open-source projects — nobody else had comparable infrastructure, when they offered file hosting at all.

This started changing in the recent years. The team began working with contributors more closely, e.g. pulling Mac-specific fixes from builds by Simone. The other related change, which is at the heart of this topic, was moving Windows installers from SourceForge over to gimp.org.

Why GIMP-Win left SourceForge in 2013

First of all, the problems with SourceForge are older than some people might expect. At some point in the mid-2000s, SourceForge stopped evolving as fast as it used to and focused on advertising-based revenue. This allowed them to grow from $6mln in revenue in 2006 to $23mln in 2009. But it also alienated free software developers due to poorer service quality, and various projects started moving away.

Among the reasons were context ads on SourceForge download pages, fine-tuned by scammers to pose as download buttons and trick users into downloading the wrong installer, typically one containing adware. GIMP users who went to SourceForge for downloads ended up with something entirely different.

Exhibit A:

My girlfriend downloaded the GIMP windows build referenced off the GIMP.org website and it seems to have a Malware/Adware package called "Sweetpacks" bundled with it. I realize that the Windows version of GIMP is linked with a "hey, this isn't us" kind of disclaimer but the fact that GIMP.org links to it gives the sense that its contents are trustworthy or, at least, not hostile. If there is really no validation of that distribution and it contains these kinds of softwares then it may not be such a good idea to have GIMP.org linking to it.

Exhibit B:

When I downloaded this recommended free banner software from the help section, I also got a virus downloaded along with it called CLARO search engine. It will infect all your browsers and you will not be able to search on anything except this stupid Claro search. I had to uninstall all my browsers and switch back to IE instead of Chrome, because reinstalling Chrome still came with this insidious malware. DO NOT download GIMP.

Exhibit C:

I want to recommend GIMP to Windows using friends, but it is not supported officially for Windows. Even worse, the download link for the Windows build goes to an ad-driven filesharing site with ads masquerading as download buttons. A friend on mine clicked on one of these and her antivirus software went nuts! This is a serious problem! Is there anything we can do to help? Does anyone know the dev for the Windows build? I will not be able to recommend GIMP to Windows using friends until that problem is solved! :gaah

The stream of complaints kept growing, and eventually it became impossible to figure out whether users were talking about false positives (Kaspersky antivirus software used to be particularly bad at handling GIMP installers) or fake installers full of actual malware.

Where's the money?

Over time, the ad-based monetization strategy at SourceForge became increasingly aggressive. Seeing up to four 320x240 AdSense banners on a downloads page became the new norm for users. Despite introducing a reporting feature, SourceForge couldn't prevent all malicious banners from displaying on their web pages.

Ads on SourceForge

Google AdSense's ad placement policy: "Currently, on each page AdSense publishers may place [...] up to three AdSense for content units". There are four units here.

Nevertheless, they continued with this strategy, and in 2013 SourceForge introduced a program of sharing ad revenue with developers, to which the GIMP team initially agreed. Michael Schumacher, GIMP's treasurer, explains:

The summary of their proposal is like this: "Hey, you are an active and popular project, if you link to your SourceForge downloads from your site, you will get money depending on the number of downloads".

At some point the issue of those ads deceiving users just got unbearable, and we cancelled that when we abandoned SF in 2013. Since GNOME handles our financial account, Karen Sandler, GNOME's executive director at the time, was involved with this too. I told Karen that we'd return any of the money if this was deemed appropriate. She didn't tell me to do so.

On November 5, 2013, the GIMP team issued an official announcement that they had stopped hosting official downloads of Windows installers at SourceForge:

In the past few months, we have received some complaints about the site where the GIMP installers for the Microsoft Windows platforms are hosted.

SourceForge, once a useful and trustworthy place to develop and host FLOSS applications, has faced a problem with the ads they allow on their sites - the green "Download here" buttons that appear on many, many ads leading to all kinds of unwanted utilities have been spotted there as well.

But that was only the first reason. Here's the other one.

The tipping point was the introduction of their own SourceForge Installer software, which bundles third-party offers with Free Software packages. We do not want to support this kind of behavior, and have thus decided to abandon SourceForge.

The team insists that this was intended as criticism of this approach, and that they explicitly stated so in their communication with SourceForge. This news was duly noted in The Register's coverage of the events, as well as at Slashdot, which, like SourceForge, is owned by Dice Holdings. In other words, the team's lack of interest in providing offer-enabled installers was communicated both directly and publicly.

In their rebuttal, posted on November 14, 2013, SourceForge representatives stated this about the offer-populated installers:

This is a 100% opt-in program for the developer, and we want to reassure you that we will NEVER bundle offers with any project without the developers consent.

However, various members of the GIMP team state that they explicitly opted out. In a recent Reddit thread, Jernej Simončič, posting under the handle 'ender', claims:

They offered us to bundle "offers", which we specifically declined shortly before moving the installer to GIMP's own servers.

Nevertheless, some time between November 2013 and now, SourceForge ignored the fact that the GIMP team had opted out of the offers program, took over the gimp-win account, and started distributing an offer-enabled installer of GIMP — which at least one team member explicitly forbade them to do — and then allegedly took all the revenue.

Exhibit D, from November 2014:

I went to SourceForge and tried to download GIMP twice and chrome would not allow the download because of MALWARE.

On May 16, 2015, Jernej Simončič sent the following request to SourceForge:

Please remove the gimp-win project from SourceForge. I do not want any kind of "offers" forced on the users of my installer, and if I knew this was going to happen, I would have shut down the project myself.

As of May 28, 2015, he reports he hasn't heard back from them yet.

The best part comes now. First of all, the offending installer has already been silently pulled off SourceForge, without any apologies. Secondly, in another official rebuttal, posted on May 27, 2015, SourceForge says that they didn't hijack the 'gimp-win' account; instead, they "stepped-in to keep this project current" and "established a mirror of releases that are hosted elsewhere". The mirrors were supposed to only store verbatim copies of the installers provided by the upstream projects.

They also made this very claim:

Since our change to mirror GIMP-Win, we have received no requests by the original author to resume use of this project. We welcome further discussion about how SourceForge can best serve the GIMP-Win author.

What it effectively means is:

  1. SourceForge had 11 days to reply to Mr. Simončič's request before publishing their blog post on the controversy, and allegedly still haven't done so.
  2. SourceForge claims to welcome further discussion, but does not participate in the ongoing discussion, and comments on their blog appear not to get approved.
  3. The only way to get SourceForge to talk at all is raising public awareness on Reddit and Hacker News, followed by coverage in popular media like Ars Technica.
  4. Even then, SourceForge would talk to the media (see updates to Ars coverage), but not to actual team members.

LGW ended up emailing these three questions to SourceForge:

  1. Could you please quote the part of the program's conditions that allows bundling offers for software projects that opted out?
  2. How, in particular, was the decision made to bundle offers for gimp-win project without developers' consent?
  3. Is it correct that in case of projects that opted out, any revenue from bundled offers goes to SourceForge/Dice only?

So far SourceForge's team have been unable to come up with any reply at all.

Update (May 30). Three days into the public leg of the drama, Jernej Simončič finally gets contacted by SourceForge who claim his request was never received. 

Update (May 31). GIMP posts an official response to SourceForge's actions. Meanwhile the news has already made it to ExtremeTech, ITWorld, PetaPixel, Golem.de, and other popular media.

Update (June 1). Slashdot, also owned by Dice Holdings, publishes a story on the controversy.

Update (June 2). SourceForge posts another blog entry where they announce that they "have stopped presenting third party offers for unmaintained SourceForge projects"; however, they still refrain from explaining why they decided to ship the offer-enabled installer without GIMP developers' consent. Ars Technica posts a new coverage of the events.

Introducing Antimony, free graph-based 3D CAD system

It seems that boxes and noodles are slowly taking over the world. Matt Keeter uses acyclic graphs in Antimony, a free/libre 3D CAD system streamlined for personal manufacturing.

With QCAD, LibreCAD, FreeCAD, BRL-CAD, and less known projects like ZCAD you'd think we are more or less settled with free/libre software for 2D/3D drafting and modeling. But most existing apps are built around concepts that we know from DXF: layers, blocks etc.

Antimony takes a different approach and relies on graph composition where nodes represent 2D/3D shapes and primitives, boolean and math operations, various transformations etc. That will certainly please a lot of people who are already used to node-based compositing in VFX software.

The project was launched by Matt Keeter, who currently works as an engineer at Formlabs, a popular vendor of 3D printers and accessories.

Editing (or should we say compositing?) in Antimony doesn't rely on just using noodles to connect boxes. The 3D view window also provides basic editing features. You can move an object along an axis or scale it, and corresponding nodes will be automatically updated in the graph composition window. So far Antimony only exports heightmaps for 2.5D processes and STL files.

Matt demonstrated the basics of using Antimony in this great video:

He also kindly agreed to answer a few questions about design specifics of Antimony and his future plans.

What was your main problem with existing CAD tools? I gather it has something to do with "drafting tables", lack of innovation etc. :)

First of all, a brief disclaimer: though I’m competent with Solidworks and Blender, I’m not a practicing mechanical engineer, so take my thoughts with a grain of salt.

That being said, most sketch-based CAD tools are based on manipulating a big blob of global state (in the form of sketches or solid models); there’s more emphasis on stacking machining operations than on encoding user design decisions and intent.

There’s also a discontinuity between the modeling system (in the form of constraint solving and geometry kernels) and user interaction.

Antimony pushes that discontinuity a bit farther from view: instead of “extrude” being an opaque operation that does something in the geometry kernel, it’s a script that you can open up and change.

How pluggable is everything? I see that nodes definitely are. What about file loaders/savers?

Nodes are very pluggable — Antimony looks into a particular directory on startup and builds up menus from “.node” files that it finds there. Loading and saving of Antimony files is all hard-coded in C++; not much room for modification there.

Exporting is somewhere in between: Python scripts can declare that they want to export something, which calls back into C++ and sets up the UI for an export process, but defining new export formats requires changing the application core (rather than scripts).
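
As an aside from us: judging by the app, an Antimony ".node" file is essentially a small Python script that declares inputs and an output expression. A rough sketch of what a circle node could look like, with function names and signatures being our assumption based on Antimony 0.8-era nodes (check the nodes directory in the source tree for the authoritative format):

```python
# Hypothetical Antimony node script: Antimony itself is assumed to provide
# title(), input() and output(); the fab module ships with the app.
import fab

title('Circle')

input('x', float)       # editable input pins on the node
input('y', float)
input('r', float, 1)    # with a default value

# The output expression is re-evaluated whenever an input changes.
output('shape', fab.shapes.circle(x, y, r))
```

This is what makes nodes "very pluggable": dropping a file like this into the right directory is, per Matt's description above, enough for it to appear in Antimony's menus.
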

Is this something you think might/should change in the future, for both importing and exporting?

Maybe, but the cost / benefit ratio is too high at the moment — it would require some architecture changes, and there isn’t enough demand for custom import/export pipelines to warrant it.

So far Antimony looks like a tool best suited for 3D manufacturing. Do you think it could grow into more directions like mechanical engineering (which means FEM and assembly among other things), architecture and BIM workflows, etc.? Would the app's architecture allow for that? Would it even be a good idea?

I’m definitely focused on personal-scale manufacturing at the moment, making tools for individuals and small teams that work with laser cutters, 3D printers, small mills, etc. This is mostly because of my background — when I was at CBA [MIT's Center for Bits & Atoms — LGW], I did a bunch of work with these smaller-scale tools.

Working with assemblies is a natural extension: once I add graph nesting, it will be very simple to create a top-level file that combines a bunch of parts. For bigger systems and architecture workflows, I’d start getting more concerned with evaluation and rendering speed. On the rendering side, there’s an interesting optimization of creating / caching meshes to save on f-rep re-evaluation when the camera angle changes.

The case of assemblies is particularly interesting, because it might mean huge graphs, and that might impact navigation (among other things). Have you tried creating really large graphs (how large?) and seeing how it works in terms of performance, usability etc.?

The largest stuff that I’ve seen is about 100 nodes. If those graphs are building a small number of parts (rather than multi-part assemblies), rendering isn’t a huge concern. The bigger challenge is in editing the graph — it’s still responsive if you’re editing things downstream, but making a change to a far-upstream parameter causes a noticeable pause as every script re-evaluates itself.

I think that there’s a lot of low-hanging fruit for speeding up big graphs. For example, I’m not even taking the obvious step of saving compiled Python byte-code; every node is being evaluated as a string (even if the text hasn’t changed).

What's your current development focus?

Right now, I’m doing a bunch of polishing for the 0.8.0 tagged release. One of the big changes is to the library of nodes: a few different people have written nodes, all with different styles and using different ways to define shapes; I’m doing a pass over the library to make it consistent and to make sure that all of the shapes are available in the fab.shapes module.

There are also a bunch of small bug-fixes and polishing going into 0.8.0.

Beyond that point, an unordered list of bigger tasks that I’m considering:

  • Switching to a QML-based UI
  • Optimizing graph evaluation speed (caching, delayed parsing, etc)
  • Computing gradients exactly rather than approximately (for better shaded rendering)
  • GPU-accelerated rendering
  • Using meshes to make rotation in the 3D viewport faster
  • Nested / hierarchical graphs
  • Reviving and extending the (bit-rotted) test suite
  • Figuring out how to build / package for Windows (help appreciated!)
  • Rethinking drawing planes (right now, 2D shapes are always in the XY plane).

Currently Antimony is available as source code, plus DMG builds for OS X users.

Krita raises over €33,000 at Kickstarter

Earlier this week, Krita Foundation successfully raised the money to fund 6 months of work on the increasingly popular free digital painting application for Linux and Windows.

The campaign launched on May 4. Two weeks into the fundraiser, 643 backers brought €20K (the baseline for the project to succeed), then 322 more pledged another €10,520. Additionally, the team received €3,108 in donations via PayPal and will use that money to work on features from the list of 24 stretch goals.

Much like last year, the team started working on some of the stretch goals already during the fundraiser: modifier keys for selections, stacked brushes, and the basics of memory management (warning you if you are about to overuse RAM). The upcoming stable v2.9.5 release will feature these and many other newly added features and fixes.

File size warning in Krita

In the coming days developers will be processing submitted surveys from users who pledged €15+ and thus got the right to vote for stretch goals, then continue working on both core tasks — performance boost and animation — and the stretch goals.

Both Google Summer of Code projects — animation and tangent normal map brush engine — are being actively worked on. You can read Jouni Pentikäinen's blog to follow his progress on animation.

Meanwhile the team published several interviews with artists who depend on Krita in their work, including David Revoy. A longer and very insightful interview with David was also done by Erik Moeller; it focuses on topics such as art, merits of different licenses, crowdfunding models etc.

GEGL gets mipmaps, 71 new image processing operations

GIMP's new image processing engine got its first update in three years, and it's so full of awesome you'll cry and demand that GIMP 2.10 be released right away.

Supernova GEGL operation in development build of GIMP

A lot of work has gone into making GEGL faster. There's still a lot of work to be done, but the new version features major improvements such as:

Better thread-safety and experimental multithreading support. You can run e.g. '$ GEGL_THREADS=4 gimp-2.9' from a terminal window. But don't expect this to automagically improve performance: it still needs a lot of testing, and developers are interested in thoughtful reports.

Experimental mipmaps support. If you are not familiar with mipmaps, here's the basic idea. Instead of working on a huge image in its entirety, an application generates a smaller version of the original image and processes it for preview. While you are evaluating the preview, the application silently chews on the real thing in the background. Again, it's an experimental feature currently not used by GIMP; whether it will prove to be GIMP 2.10 material depends on contributors' activity.

New default tile backend writes to disk in a separate thread. This should make GIMP more responsive while saving/exporting files.

GEGL 0.3 also got 71 new image processing operations. Mostly they are ports of existing GIMP filters, and that automatically makes them eligible for the future non-destructive editing workflows. A lot of that work was done by Thomas Manni who is among the most silent and hard-working GEGL contributors of late.

However, porting GIMP filters to GEGL doesn't necessarily end at writing a GEGL operation and compatibility code for GIMP to keep the operation accessible for plugins and scripts. Some GEGL filters, like the Fractal Explorer, have a lot of options, hence an automatically generated user interface may simply not fit even a 4K display vertically.

Automatically generated UI for Fractal Explorer port on a 1920x1280 display

To fix that, one needs to write a custom user interface in GIMP. This started creeping into GIMP's code base about a year ago. The Diffraction Patterns operation is among notable examples of recreating a familiar interface with all the benefits of using the GEGL tool's skeleton, such as presets and live preview on canvas.

Diffraction Patterns has a compact custom user interface much like the original GIMP plugin

On a related note, one of the slightly nerdy new features of this GEGL release is 'ui_meta'. Basically, GEGL operations can now provide useful hints to GEGL-based applications about the best way to render user interfaces for various properties.

Here are just a few examples. If you want GIMP to display a rotary widget for quickly setting an angle, you can add 'ui_meta ("unit", "degree")' to the property in question.

Rotary widget for quickly choosing an angle in development build of GIMP

The ("unit", "relative-coordinate") meta will create a button next to the input field; clicking it lets you pick a relative position from your image, for example the center of a Zoom Motion Blur effect.

Additionally, if there are two adjacent properties, where the first one has the ("axis", "x") meta and the other one has ("axis", "y"), GIMP will create a chain button for these two values, so that you can e.g. lock the ratio between the two values or keep them equal.

X and Y values can be locked to each other, and you can pick a relative position

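
Put together, the hints above might look like this in an operation's property block. This is an illustrative fragment modeled on GEGL 0.3's property macros, not a complete, buildable operation; see the operations shipped in GEGL's source tree for the exact usage:

```c
/* Fragment of a hypothetical GEGL operation definition: property
   declarations with ui_meta hints that GIMP uses to pick widgets. */
property_double (angle, _("Angle"), 0.0)
  value_range (0.0, 360.0)
  ui_meta     ("unit", "degree")              /* rotary angle widget */

property_double (center_x, _("Center X"), 0.5)
  ui_meta     ("unit", "relative-coordinate") /* on-canvas position picker */
  ui_meta     ("axis", "x")                   /* chained with center_y below */

property_double (center_y, _("Center Y"), 0.5)
  ui_meta     ("unit", "relative-coordinate")
  ui_meta     ("axis", "y")
```

The point of the mechanism is that the operation stays UI-toolkit-agnostic: the same hints can drive a GTK+ widget in GIMP or an HTML control in a web front-end.
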
More work needs to be done on the ranges of properties' values exposed in the user interface.

But wait, there's more. Jon Nordby backported all the changes he made to GEGL while working on The Grid, an artificial-intelligence-based CMS that relies on GEGL for all image processing work. One of them is reading custom GEGL operations written as JSON files.

img_flo web app for creating node compositions with GEGL operations

The idea is to reuse the concept of meta-operations, available in GEGL for a very long time already. E.g. such a core filter as unsharp mask is actually a meta-operation that combines the use of several other operations: add, multiply, subtract, and Gaussian blur. You can create your own meta-operations of any complexity with the img_flo web app, then use them from within GIMP.
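
To give a feel for the format: a meta-operation stored as JSON is essentially a serialized node graph. Here is a sketch of a tiny two-node sharpening graph in the flow-based-programming style of JSON that img_flo works with; the component, process, and port names below are our assumptions for illustration, not a verified schema:

```json
{
  "properties": { "name": "my-sharpen" },
  "processes": {
    "blur":     { "component": "gegl/gaussian-blur" },
    "subtract": { "component": "gegl/subtract" }
  },
  "connections": [
    { "src": { "process": "blur",     "port": "output" },
      "tgt": { "process": "subtract", "port": "aux" } }
  ],
  "inports":  { "input":  { "process": "subtract", "port": "input" } },
  "outports": { "output": { "process": "subtract", "port": "output" } }
}
```

A real unsharp mask would also need the source image routed into the blur node and a multiply/add stage; how fan-out and extra stages are expressed depends on the format's exact rules, so treat this strictly as a shape sketch.
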

Finally, just to avoid confusion, newly released GEGL 0.3 is not something you can "install" into existing stable version of GIMP and automatically get all the new features. It's best to treat this as a foundation of what's coming in GIMP 2.10 and beyond.

92 people contributed to making GEGL 0.3 happen, but there are still plenty of contribution opportunities for everyone: porting more filters, improving default value ranges and descriptions, making further performance improvements, and adding exciting new features.

ArgyllCMS 1.8.0 released with support for SwatchMate Cube colorimeter

Graeme Gill released a major update of ArgyllCMS with newly added support for two color measurement devices from opposite ends of price and quality spectrum.

The first supported instrument is SwatchMate Cube, a little fancy colorimeter you can carry around to pick a color swatch from wherever you want, then review the acquired palette on your mobile device (iOS, Android), paste to your Photoshop project etc.

SwatchMate Cube

Cube was successfully crowdfunded on Kickstarter a year and a half ago and caused quite a bit of media excitement, as if it were the first portable device ever to pick colors from physical objects (it wasn't).

Graeme got a Cube mainly for two reasons: because it was made in Melbourne, where he lives, but also to see how this entry-level device (ca. US$180) stacks up against more expensive and more commonly used instruments like the X-Rite ColorMunki. He ended up writing a two-part article where he explained why, and by how much, the Cube's readouts are hit and miss (especially for glossy surfaces), and how the device could be further improved.

The other newly supported device is the EX1 by a German company called Image Engineering. The EX1 is a spectrometer for measuring light sources. At €2,800 it's not exactly something you'd throw spare cash at, but rather something you get to ensure the highest color fidelity in a professional environment.

Image Engineering EX1

Other changes include:

  • support for Television Lighting Consistency Index (EBU TLCI-2012 Qa) in spotread and specplot apps' output;
  • support for adding R9 value to CRI value in spotread and specplot apps' output;
  • various bugfixes, library dependencies updates etc.

For a complete list of changes, have a look at the website. In addition to source code, builds are available for Linux, Windows, and OS X.

Graeme also updated his commercial ArgyllPRO ColorMeter app for Android. The new version features pretty much all the improvements from the new ArgyllCMS release. It also receives readouts from the Cube via Bluetooth Low Energy (USB is available too) and supports using the Chromecast HDMI receiver as a video test patch generator. As usual, a demo version of the app is available.
