Channel: Libre Graphics World Blog

Red Hat releases free/libre Overpass font family


Red Hat has announced the release of Overpass, its own highway gothic font family designed by Delve Fonts. Overpass is available under the terms of the SIL Open Font License.

In 2011, the company commissioned the project from Delve Withrington. The idea was to reuse the Standard Alphabets for Traffic Control Devices and adapt them to screen resolution limits. Originally, Delve and his team created just the Regular and Bold upright faces. However, in 2014, Red Hat returned to them for more weights and faces: under Delve's direction, Thomas Jockin drew the Light weight, and Dave Bailey assisted with drawing the italics.

The first public version of the font family is available in Extra Light, Light, Regular, and Bold weights, in both upright and italic versions. So far Overpass has complete Extended Latin coverage and supports a variety of OpenType features such as fractions, ligatures, and localized forms.

Overpass fonts specimen

You can download Overpass as TTF files, as well as WOFF, SVG, and EOT. If you are willing to tweak or enhance the font family, the source VFB files (FontLab Studio) are available on GitHub (it would be nice to have UFO sources there as well).

We spoke with Andy Fitzsimon, a brand manager at Red Hat, about the history of this project and further plans.

Overpass is based on a typeface standard for spatial navigation. Why did you pick it for user interfaces and internal websites? Is it because it's something people are already accustomed to?

In the earlier days of the Red Hat brand, a way-finding typeface was chosen for various reasons. One quality that I've always liked about Highway Gothic is that it has a global cultural association with a common good.

Also, with such prominent characteristics on many glyphs, particularly the angle on many ascenders, it's a self-governing system to write with. Writing needs to be informative, short, and to the point to be visually appealing. That's the type of writing Red Hat wants to do: concise, helpful, and standards-born.

How and why did you choose Delve Fonts for the commission?

The Overpass story started with a software distribution branding need. Highway Gothic had the brand look Red Hat was using but not all the options a typographer expects (or any high quality, open source font files).

Red Hat material was already using a commercial digitization of Highway Gothic that had all the bells and whistles designers love (various weights, condensed text, italics, etc.). But using that font meant branded text had to be rendered into precomposed images before being used in print and other graphics.

It didn’t make sense to buy a commercial font license for every customer and every community member who touches our software. So branded strings of text had to be baked into images by trained designers with a license. You can see how that would be frustrating if we tried to typographically brand ever-changing UI elements.

The commercial digitisation of Highway Gothic that Red Hat was previously designing with was not available as a webfont and, quite honestly, is still not suited to be one, due to its print-focused, detailed node coordinates, which mean a larger file size than is common with similar webfonts.

At first, the Regular and Bold variants of Overpass were commissioned by our engineering department for use in desktop and web UIs to retain the Red Hat corporate look.

Andy Fitzsimon

One thing to note: the Overpass Regular variant is more of a bold, and Overpass Bold is more of an extra-bold, which is fine for nav bars and buttons that need to be ...bold. But when I came on board to the Brand team, my first request from my boss was that we take over the project and expand the series into a light (regular-looking) weight for use on the web, so that our digital content was a little less “shouty”.

I reached out to Delve, as the designer of the Regular and Bold weights, to continue the project, and he did a tremendous job!

We put the Light weight through its paces on redhat.com and even used it as the default weight in presentations we made with reveal.js, as well as on other websites.

Since that expansion was a success, we moved on to expanding the series with true italics for use in citations and testimonials. We also added Extra Light and its italic equivalent so that we could get more conversational when using large font sizes.

Now we're effectively at our first stable release for the entire family — and we are pretty happy to use Overpass as-is for a while.

We chose to continue to work with Delve Fonts for the entirety of the project because that's our working style. We know we're lucky when we have direct contact with a creative expert. Big agencies don't offer the same kind of access and quick collaboration that we've enjoyed when working with Delve Withrington and his team.

Delve Withrington

Currently Overpass has extended Latin coverage. Do you intend to get Delve et al. to add Cyrillic, Arabic, etc.?

We haven't discussed Cyrillic, Arabic, Indic, Korean, Japanese, or Chinese expansions of Overpass yet, but the repo is on the project page, and we're more than happy to accept quality commits from interested designers in the community ;-).

Overpass fonts specimen

Aside from Korea, Japan, and China, we tend to do business using the Latin alphabet, so sponsoring those expansions may be a while off. I personally can't QA other character sets either. For Red Hat, for now, pairing the weights of Overpass with other quality open source fonts like Google's Noto Sans series is enough to get by.

What kind of further improvements is Red Hat willing to invest in?

Eventually, we may expand it to introduce a black weight and/or two monospace variants so that code snippets and command line rules can have a Red Hat look.

What would be examples of Red Hat software titles where Overpass was used for branding?

Today, all our software with a web UI uses Overpass to express the Red Hat brand. Our customer portal, corporate website, presentations, and staff desktops all make use of the font family in daily business.

Is Red Hat planning to continue using Overpass in its own branded products now that Overpass is freely available for everyone to use?

As we harden upstream projects into official Red Hat products, we're going to use Overpass more and more to identify the alignment of our brand to what we make. Our commercial competitors have their own typographic languages. So we're not worried about confusing the marketplace when it comes to enterprise software.

Overpass has been open source from the beginning, from the stencils of the SAFTCD to the font files you see today. We think that speaks volumes about Red Hat as a company.

The great thing about our corporate font being open source, is that we get to watch it grow beyond the walls of our business.  Designers will use it for unique and wonderful purposes, some shocking to trained typographers – and that's okay.  It's a tool for everyone.


3D printing support in CUPS demystified


Last week Apple released a new version of CUPS, the default printing system on UNIX and Linux, with what was called “basic support for 3D printers” by pretty much all media, with no details whatsoever. This has already caused some confusion, so we spoke to Michael Sweet and a few other stakeholders about CUPS, the IEEE-ISTO Printer Working Group, and the 3D initiative.

What’s the scope?

Most confusion was caused by the lack of understanding or, rather, the lack of explanation of what CUPS has to do with 3D printing, and how far the PWG's 3D initiative is supposed to go. This question can easily be answered by the slides from the first birds-of-a-feather face-to-face meeting almost a year ago.

Essentially, it boils down to these few points:

  • networked 3D printers provide little or no feedback over the network;
  • there is no single standardized network protocol for them;
  • there is no open file format to handle most/all state-of-the-art 3D printing capabilities.

So the idea is that users should be able to:

  • easily access a networked printer that has the required materials, and submit a print job;
  • print multi-material objects in a single-material 3D printer, which means the printer gets instructions to stop at a certain layer, let the user change materials, and then proceed further;
  • remotely track printing progress;
  • receive notifications about clogged extruder, filament feed jam, running out of PLA, etc.

As you can see, these requirements are pretty much what people are already used to when dealing with common networked 2D printers in offices.

To aid that, since their first get-together in August 2014, participants of the birds-of-a-feather meetings have been working on a white paper that defines an extension to the Internet Printing Protocol to add support for additive manufacturing devices. The white paper focuses on, but is not limited to, fused deposition modeling, and takes cloud-based printing into consideration.

Suggested extensions to IPP include various new attributes like material name, type, and color, print layer thickness, current extruder temperature, various printer description attributes, and more.
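To make the idea concrete, here is a rough sketch of what such a job ticket and printer status could look like. The attribute names below are illustrative paraphrases of the categories the white paper describes, not quotes from any published IPP standard:

```python
# A sketch of what an IPP 3D job ticket could carry; the attribute names
# below are illustrative only, paraphrasing the white paper's categories,
# NOT taken from a published IPP registration.
job_ticket = {
    "material-name": "PLA-red",       # human-readable material name
    "material-type": "PLA",           # material class
    "material-color": "#CC0000",      # material color
    "layer-thickness-mm": 0.2,        # print layer thickness
}

printer_status = {
    "extruder-temperature-c": 210,    # current extruder temperature
    "printer-state": "processing",    # analogous to the 2D IPP printer-state
}

def describe(attrs):
    """Render attributes as 'name=value' lines for logging/inspection."""
    return "\n".join(f"{name}={value}" for name, value in sorted(attrs.items()))

print(describe(job_ticket))
```

The point of the exercise is that a 3D queue needs both static job attributes (what to print with) and live device state (what the printer reports back), mirroring how 2D IPP splits job tickets from printer status.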

While the white paper is getting increasingly detailed with each revision, in a conversation with LGW, Ira McDonald (High North; a PWG member, PWG secretary, and IPP WG co-chair) stressed:

This is NOT a standards development project in PWG yet (and may never be). We do have several 3D printer manufacturers and major software vendors who have contributed ideas and privately expressed support. But we’re not at the consumer promotion stage yet. We’re engaging 3D Printing vendors and other standards consortia to gauge interest at present.

Currently, CUPS is only used as a testbed for the white paper. Michael Sweet (Apple, CUPS, PWG Chair and IPP WG secretary) explains:

CUPS 2.1 added a “3D printer” capability bit to allow 2D and 3D print queues to co-exist on a computer. There is no explicit, out-of-the-box support for 3D printers there, but we’ll be able to experiment and prototype things like the white paper to see what works without seeing 3D printers in the LibreOffice print dialog, for example.

So when you read elsewhere in the news about support for 3D printers in CUPS, you should make a mental note to put a lot of quote marks around the word “support”.

Exploring file formats standardization

The white paper only vaguely touches on the topic of an Object Definition Language and cautiously suggests the AMF file format (ISO/ASTM 52915), developed by ASTM Committee F42 on Additive Manufacturing Technologies, a committee that comprises pioneers of additive manufacturing such as David K. Leigh and represents businesses and institutions such as Met-L-Flo Inc., Harvest Technologies (Stratasys), NIST, and others.

AMF has certain benefits over some older file formats common in manufacturing: support for multiple materials, curved surfaces, etc. Unfortunately, the specification is not freely available, which has hampered its adoption.

Additionally, the participants of the BoF meetings evaluated other options such as STL, DAE (COLLADA) and, more interestingly, 3MF, a file format designed by Microsoft and promoted by the 3MF Consortium, which brings together companies like HP, Autodesk, netfabb, Shapeways, Siemens, SLM Solutions, Materialise, and Stratasys.

Earlier this year, Michael Sweet reviewed the v1.0 specification of the 3MF file format. He disagreed with some design decisions:

  • the ZIP container makes streaming production almost impossible and adds space and CPU overhead;
  • the job ticket is embedded into document data (and shouldn’t be);
  • limited material support: the only material attribute is sRGB color;
  • all colors are sRGB with 8 bits per component; CIE- and ICC-based DeviceN color is missing;
  • no way to specify interior fill material or support material.

Even though the Consortium isn’t particularly open, Michael says he’s been in conversation with both the HP and Microsoft reps to the 3MF Consortium:

Based on the responses I’ve received thus far, I think we’ll end up in a happy place for all parties. Also, some of the issues are basically unknowns at this point: can an embedded controller efficiently access the data in the 3MF ZIP container, will the open source 3D toolchains support it, etc. Those are questions that can only be answered by prototyping and getting the corresponding developers on board.

So there’s still work to do on this front.

For developers, the 3MF Consortium provides an open source C++ library called lib3mf, available under what appears to be the BSD 2-clause license.

Who are the stakeholders in the initiative?

First of all, to give you a better idea: the Printer Working Group is a program of the IEEE-ISTO, which manages industry standards groups under the IEEE umbrella.

According to Michael Sweet, several PWG members had expressed interest in a 3D track during face-to-face meetings and offline, so the steering committee agreed to schedule BOFs at subsequent face-to-face meetings, starting with the August 2014 one.

Mixed Tray in Stratasys Connex1 3D printer

This is where it gets interesting. None of the current Printer Working Group members are, strictly speaking, core 3D companies. Here's what it looks like:

  • HP is in partnership with Stratasys and Autodesk (using their Spark platform) and planning to start selling their own Multi Jet Fusion units in 2016.
  • Canon and Fuji Xerox already resell CubePro and ProJet printers made by 3D Systems, and Kyocera got into a partnership with 3D Systems in March 2015 for the very same reason.
  • Brother was last heard of (in early 2014) reconsidering whether to enter the 3D printing market some time in the future.
  • Epson expressed (also in early 2014) a lack of interest in producing consumer-level units, aiming instead to make industrial 3D printers within the next several years.
  • Xerox has been in business with 3D Systems at least since 2013, when they sold part of their solid ink engineering/development team to 3D Systems “to leverage both companies’ 3D printing capabilities to accelerate growth and cement leadership positions”. Moreover, in January 2015, Xerox filed a patent for Printing Three-Dimensional Objects on a Rotating Surface.
  • Ricoh made a loud announcement in September 2014 about jumping into 3D printing business and leading the market, but so far they are simply reselling Leapfrog 3D Printers in Europe and providing printing services in two fablabs in Japan.
  • Samsung, as some sources assert, isn't planning to enter the market until ca. 2024; however, in September 2014 they filed a patent that covers a new proprietary multicolor 3D printing process, and in 2015 they partnered with 3D Systems for a few trade shows.
  • Intel has no related products, but they do support Project Daniel which uses 3D printing to make prosthetic arms for children of war in South Sudan.
  • Most other companies are in the consulting and software/network solutions development business.

None of the market-founding companies like Stratasys and 3D Systems (both launched in the late 1980s) are in the PWG. However, since this project is still at a very early stage of evolution, we probably should not expect this to change soon.

Even so, there is reportedly some off-list activity. When asked about the interest of 3D printer vendors in standardization, Michael Sweet replied:

My impression is that while they are interested they are also just starting to look at supporting networking in future products — still a bit early yet for most. Both Ultimaker and Microsoft have provided technical feedback/content that has been incorporated into the white paper, and I’ve been promised more feedback from half a dozen more companies, many of whom actually make printers and software tools for 3D Printers.

The 3D BOF participants have been reaching out to vendors since late 2014, but there are still more companies to talk to. LGW contacted Aleph Objects, Inc., the makers of FSF-approved LulzBot 3D printers. In a conversation, Harris Kenny stated that the team at Aleph Objects hadn’t heard of the PWG 3D initiative before, but is interested in following its progress.

LulzBot TAZ 3D printer

What gives?

While 3D printers are slowly becoming common in companies that need rapid prototyping and are even creeping into the households of tinkerers, they are not likely to be as common as 2D printers any time soon.

A recent study by BCC Research suggests that the global market for 3D printing will grow from $3.8 billion in 2014 to nearly $15.2 billion in 2019. At the same time, another recent study, by Smithers Pira, estimates that the global printing market will top $980 billion by 2018. There's a deep black abyss between these two numbers.

The good news is that by the time anyone, for good or bad reason, can own a 3D printer, we might already have all the software bits and protocols in place to make it just work.


Feature image is Sculpture #10 by Pyromaniac.

SANE update brings support for over 300 scanners and MFUs on Linux


SANE is not the most often updated pack of drivers and associated software around, but when they release, they do deliver.

The newly released SANE backends v1.0.25 features support for over 300 new scanners and multifunction units, quite a few of which have been introduced in the two years since the last SANE release.

Relevant changes boil down to improvements in a variety of existing drivers (Canon, Fujitsu, Genesys, Kodak, and more) and the arrival of new ones: epsonds (Epson DS, PX, and WF series) and pieusb (PIE and Reflecta film/slide scanners). The support status page hasn't been updated to reflect the changes yet.

The scanimage tool finally got support for saving to JPEG and PNG (previously it only saved to PNM and TIFF).
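As a small illustration of the new output formats, here is a sketch of driving a scan from Python. The device string is hypothetical, the resolution flag is backend-dependent, and actually running the command requires a connected scanner:

```python
import subprocess

def build_scan_command(device=None, fmt="png", resolution=300):
    """Assemble a scanimage invocation; --format=png/jpeg is what 1.0.25 adds."""
    cmd = ["scanimage", f"--format={fmt}"]
    if resolution:
        cmd.append(f"--resolution={resolution}")  # backend-dependent option
    if device:
        cmd += ["-d", device]  # e.g. "epsonds:libusb:001:004" (hypothetical)
    return cmd

cmd = build_scan_command(fmt="png")
print(" ".join(cmd))

# To actually scan (requires hardware):
# with open("scan.png", "wb") as out:
#     subprocess.run(cmd, stdout=out, check=True)
```

Before 1.0.25, you would have piped PNM output through a converter; now the format conversion happens inside scanimage itself.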

The release also features a workaround by Allan Noah for buggy USB3/XHCI support on Linux. This should prevent you from "dancing on your left leg while sacrificing a goat" to launch scanning on newer Linux systems.

Expect an update in your Linux distribution of choice, or grab the source code and DIY.

Afanasy Render Farm Manager Gets Natron Support


Timur Hairulin released an update of his free/libre CGRU render farm management tools.

The newly arrived version of CGRU features support for Natron, a free/libre VFX compositing and animation application, as well as for Fusion, one of its proprietary counterparts, made by Blackmagic Design.

Timur has great hopes for Natron:

I still haven’t used it in production, because it needs to become more stable first. Once that’s done, getting an artist to use Natron should be easy. After all, it looks and behaves a lot like Nuke. Besides, it has a great Python API. For instance, I don’t need to create gizmos in TCL like in Nuke.

Once you install CGRU, you will find CGRU's Natron plugins in the cgru/plugins/natron folder. You should add this path to the NATRON_PLUGIN_PATH environment variable, which will make the Afanasy node available in Natron. Further documentation is available at the project's website.
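For example, the variable can be set for a Natron session like this (the install path below is illustrative; adjust it to wherever you unpacked CGRU):

```python
import os
import subprocess

# Illustrative location of a CGRU checkout; adjust to your setup.
cgru_plugins = os.path.expanduser("~/cgru/plugins/natron")

env = os.environ.copy()
# Point Natron at CGRU's plugin folder; join with any existing
# plugin paths if you have already set this variable elsewhere.
env["NATRON_PLUGIN_PATH"] = cgru_plugins

print(env["NATRON_PLUGIN_PATH"])

# Launch Natron with the augmented environment (requires Natron installed):
# subprocess.run(["Natron"], env=env)
```

Setting the variable in your shell profile (e.g. via export) achieves the same thing permanently.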

Support for Fusion was added by Mikhail Korovyansky. He tested it on v7, but v8 should be supported as well.

Additionally, Keeper now allows quickly changing the local render user name, and rules now allow player linking to the current frame.

Given the already existing support for Blender in CGRU, a complete libre-based studio solution should now be closer.

CGRU 2.0.7 with Natron and Fusion support is available for download for both Linux and Mac OS X users.

Book Review: Digital Painting with Krita 2.9


If you are new to digital painting, Krita is free to install and use, and the newly published book by Scott Petrovic will get you started in no time.

The reasons I’m personally very excited about this book go beyond the actual content in question. Some of them have to do with the author’s persona, and others — with how this book looks in the general context of manuals on free software for creative professionals.

Lately I’ve been noticing a lot of excitement about Linux being a success because it’s on your phones/tablets and on the servers that run your favorite websites and services. But talk to a publisher about writing a book on a major but niche free application, and you typically get red lights unless it’s PostgreSQL or LibreOffice or some other highly visible project.

This has led Scott Petrovic, a designer, developer, and artist from St. Louis, USA, to start his own little publishing company, Louvus Media; it would otherwise have been impossible for him to publish a book on Krita, the increasingly popular free software for digital painting. Scott ended up single-handedly doing the layout and typesetting for both the printable and the MOBI/EPUB versions of the book.

One more thing that needs to be said about the author is that he’s pretty much part of the development team of the Krita project. Here’s a quick rundown of Scott’s contributions to date:

  • designed Krita’s new website;
  • implemented saving tools settings between sessions;
  • did most of the UI for the new transform tools;
  • improved brush editor UI in 2.9.x;
  • fixed various bugs.

Scott is also the reason why Krita has been featured in ImagineFX magazine since early this year.

Being that involved with the project tends to have a huge impact on the accuracy of the content: I could not detect a single inaccurate statement in the book.

books pages 1

Typically publishers hire technical editors to make this possible. Scott got half of the Krita team to become his technical editors, and he’s giving back by promoting the software through the book and sharing a slice of whatever income it will bring with Krita Foundation.

In his own words:

I do believe there is a need for this type of material. Something that will make a big impact on Krita's future and how the graphics community sees it. I hope it will help people see how great open source software can be.

That said, given the author’s background, the book is surprisingly non-technical. Don’t expect to find blending mode equations or explanations of the app’s internals. Even the chapter on installation is rather quick and non-verbose (and rightfully so).

Instead, Scott focused on teaching actual skills: nearly every feature is explained through how you can apply it in practice.

Another major pro is that Scott used (with permission) other artists’ works to illustrate the book, along with his own illustrations, which are very nicely done.

book pages 2

If you’ve read books on free software for designers and photographers before, you know that the illustrations are typically from the ‘meh’ department at best. Making a book on software for artists look like it’s actually made for artists is how things are supposed to go. But given how real life works, this book makes a major, if belated, step forward. Oh well.

For reference, here are the contents of the book, chapter by chapter:

  1. User Interface
  2. Painting Fundamentals
  3. Layers
  4. Selections and Transforms
  5. Drawing Aids
  6. Adjustments, Filters, and Effects
  7. Brush Editor Overview
  8. Brush Engines
  9. Working with Color
  10. Vector Tools

As you can see, the book covers some tools that are often considered generic, like filters and effects. However, Scott puts these tools into the digital painting context, so you need not worry about it.

Conclusions

In a nutshell, ‘Digital Painting with Krita 2.9’ by Scott Petrovic is your go-to book to get cracking with Krita. It explains both basics and advanced features in a way that gets you to actually try, understand, and actively use them.

You probably won’t treat the book as a complete reference to Krita’s features, but that’s OK. Once you know your way around the software, you can happily live off tutorials by David Revoy et al. and pick up the nitty-gritty details from conversations with other users online, as well as from the online reference documentation that’s been getting a lot of attention lately.

So far this is the first and only book in English on Krita (there are two more in Japanese), and as first things go, it’s pretty amazing.

One last thing that has to be said is that Scott welcomes translations of his book. Should you decide to work on one, feel free to discuss it with him: you can reach him in the IRC chat room (scottyp) or send him a message on the forum (scottpetrovic).

GIMP is 20 Years Old, What’s Next?


This week GIMP celebrates its 20th anniversary. There has been a lot of excitement, but there are also some concerns about the project. Some reasonable, difficult questions have to be addressed. The main one is how the project's going to deal with challenges it's been facing for a while. Let's take a look at some of them.

Obligatory disclaimer: I'm affiliated with the project, so you should always treat whatever I write on GIMP with reasonable suspicion.

If you've been following GIMP's progress over recent years, you couldn't help noticing the decreasing activity in terms of both commits (a rather lousy metric) and the number of participants (a more sensible one).

"GIMP is dying", say some. "GIMP developers are slacking", say others. "You've got to go for crowdfunding" is yet another popular notion. And no matter what, there's always a few whitebearded folks who would blame the team for not going with changes from the FilmGIMP branch.

So what's actually going on and what's the outlook for the project?

Project Activity and Features

Here's something you might like to consider before you arrive at any conclusions: the major roadblock is the GEGL port.

That's right: the very thing that is designed to right all wrongs and bring high bit depth precision, non-destructive editing, access to more color spaces etc. is the reason why the vast majority of the work is done by Michael Natterer these days.

Here's what an interested contributor figures out after a bit of investigation:

  • Most cool new features need to be based on GEGL; anything based on the old core is verboten.
  • A few features can only be released along with GIMP 2.10 at best. No one knows when it will be out.
  • Nothing new is likely to be added in GIMP 3.0, because it will focus on just the GTK+3 port to bring back Wacom support for Windows/Mac users. It doesn't have a release date either.
  • Realistically, the most exciting features would be part of GIMP 3.2 or later releases. There's no timeline for those either.

What your average contributor takes from this picture is that they shouldn't bother working on something exciting for GIMP, because it's likely to be years before it reaches actual users.

Needless to say, there is not much love in contributing to the GEGL port: it's technically challenging, as it requires getting a grasp of three codebases at once: GIMP, GEGL (the new engine), and babl (a little color format conversion library).

The exception here is porting existing GIMP filters to GEGL operations: it's a documented process, there are examples to learn from, and there's a status page to follow the progress. So far there have been over a dozen contributors to this subproject, including coding superstar Thomas Manni. Thanks to their effort, a lot of the filters available in the upcoming GIMP 2.9.2 are actually GEGL operations with on-canvas preview.

What's the solution?

The good news is that the GEGL port is actually nearing completion. Once v2.9.2 is out, the team is likely to start wrapping things up and polishing what's already there. Features that aren't complete are cunningly hidden on the Playground page in the Preferences dialog.

The next major milestone is likely to be improving performance (write first, optimize later) which isn't at its best at the moment.

Realistically we could be looking at yet another year of development. How bad is that?

In my experience, the development version has been rather stable for daily use for quite a while, but some operations take a lot of time to render. This basically means that some users are going to be happy with unstable versions as long as releases keep coming, and some less so.

To Windows/Mac users specifically it means more time with subpar Wacom support.

So for most of us who lack coding skills the solution is to sit tight and encourage developers to complete skinning the mammoth called GEGL.

Crowdfunding

Getting the community's money to pay for full-time development of GIMP sounds like a sensible way to increase the project's activity, as witnessed by e.g. Krita users.

There's, however, one thing that crowdfunding cannot fix: human resources.

If you don't have a person to organize everything, you cannot have a successful crowdfunding campaign. It's very nearly a full-time occupation: ask Boudewijn Rempt (Krita) or Konstantin Dmitriev (Synfig). GIMP doesn't currently have someone to do it.

If you don't have an actual developer to work full-time on the project, you can't have a campaign at all. So far no existing contributor has volunteered to work on the project full-time.

Interestingly, it doesn't mean there's no crowdfunding for GIMP at all. The team has been encouraging private campaigns for quite a while, and there have been two cases that could be called a mixed success:

  • In 2012, Nicolas Robidoux launched a campaign on FreedomSponsors to fund his own work on new and better interpolation methods for downscaling and upscaling. Both proposed downscaling samplers have been implemented and available in GEGL for a couple of years, and they are exposed in the upcoming GIMP 2.9.2. I've been using them exclusively for downscaling screenshots ever since the code made its way to GIMP in December 2012.
  • In 2013, Jehan Pagès launched a campaign to fund his work on symmetry painting mode. The feature is now complete and waiting to be merged into the main development branch.

Both campaigns were promoted by the GIMP team which explicitly encourages more developers to do this kind of fundraising.

The Usability Quest

The news post on the anniversary graciously says:

Since its public release the project has been evolving in many ways as a testbed for new ideas, which was considerably assisted by adding plug-in architecture.

It's really a fancy way of saying "we added a bunch of features, because why the hell not". Obviously, that couldn't automagically lead to beautiful interfaces. Or, as some users would point out, it led to one mess of a user interface. Granted, the team has publicly admitted this numerous times.

In 2006, through the OpenUsability program, GIMP got hooked up with Peter Sikking, a usability architect residing in Germany. A painfully meticulous person, Peter led the team through the whole process:

  1. Defining product vision
  2. Identifying key areas to focus on
  3. Interviewing professional users and analyzing their input
  4. Writing specs and designing interaction
  5. Writing actual code

Most proposals coming from Peter have proven to work just fine for everybody, with two exceptions, both of which caused quite a stir:

  • Removing menu from the toolbox and creating a blank image window.
  • The great Save/Export divide that still causes short-lived outbreaks of rage every now and then.

Despite overall fruitful collaboration, around 2012 the relationship between Peter and the GIMP team started cooling off, and in early 2015 Peter officially resigned.

Earlier this year, Jehan Pagès decided to reboot the GUI project. He took over gui.gimp.org and launched a new mailing list where existing usability issues could be tackled in a structured manner.

This hasn't led to any actual changes in the code so far, but the structured approach roughly follows that of Peter's team. The main issue here is that Jehan is currently busy with the ZeMarmot animated movie, which leaves him little time to work on user experience issues in GIMP.

There is no simple solution here either. Usability experts don't appear out of thin air to work on free software for free (Peter was actually paying real world salary to his team out of his pocket to work on GIMP). Given that, it remains to be seen how much the team has learnt from Peter, and how much of that is applicable in further work.

I'm Really Bored Now, What's Your TL;DR?

Like many free software projects, GIMP is facing some challenges that cannot be easily worked around.

Decreasing activity, the lack of centralized crowdfunding efforts, and little work on usability are all mostly the result of lacking human resources. This can eventually be helped by releasing GIMP 2.10, which is completely GEGL-based (the port is nearly done), and GIMP 3.0 (GTK+3-based), which should encourage developers to contribute new features.

While all this sounds somewhat discouraging, the upcoming GIMP 2.9.2 is bringing a lot of much-anticipated features that will keep you busy while work on wrapping up the 2.9.x series proceeds.

Finally, to answer the question, what non-coding contributors could do for the project, allow me to quote the new FAQ section on gimp.org:

  • Post awesome art online and tell people you made it with GIMP.
  • Help new GIMP users in an online forum you visit.
  • Write a great tutorial on getting something done with GIMP and post it online or submit to GIMP Magazine.
  • Do a GIMP workshop in your local community.
  • Improve translation of GIMP and/or its user manual into your native language.

Simply put, the only way to make it right is to get busy.

Morevna animation project launches new crowdfunding campaign

Three years after releasing a community-funded teaser, Morevna project returns to crowdfunding with a revamped story line and entirely new visuals.

Morevna project is a Russia-based open animation project that has been driving the development of 2D vector animation package Synfig for the past several years.

The story is loosely based on a Russian fairy-tale that features a kick-ass female protagonist, an evil wizard, crazy horse chases, getting physical over a woman, dismembering and resurrecting the male protagonist, an epic final battle—and it's all inevitably twisted around a damsel in distress situation. Your average bedtime story for the kiddies, really.

The updated plot is taking place in the future, where robot overlords are just as bad as the wizards of old (with the exception of womanizing, for obvious reasons), and distressed damsels handle samurai swords like nobody's business. Ouch.

Both Morevna and Synfig have the same project leader, Konstantin Dmitriev. Both projects have benefitted from crowdfunding in the past, especially Synfig. But with a new concept artist and, in fact, a new team, it was time for Morevna to get to the next stage.

Last week, Konstantin launched a new campaign to fund the dubbing of the first episode in the first ever Morevna series. The work would be done by Reanimedia Ltd., a Moscow-based dubbing studio that specializes in anime movies and has a bit of a cult following due to the high quality of the localizations it provides.

And here's an unexpected turn of events: the dubbing will be in Russian only. Moreover, the campaign was launched on Planeta.ru, which makes it somewhat difficult for non-Russian users to contribute. So LGW had no choice but to interview Konstantin.

(Disclaimer: the interview was originally published a week ago in Russian. This is its shorter version.)

The promotional video left some questions unanswered. Like a very basic one: how many episodes are planned?

So far we are planning 8 episodes.

You are deliberately focusing on the Russian audience instead of a wider international one. Why?

It's our primary goal to create an anime movie in Russian. It only stands to reason that the campaign would be interesting mostly to the Russian community.

Will there be another campaign to make the series available in English?

No, we are taking an entirely different approach here. We'd have to search for the right team and the right studio, so instead we'll release a fan dubber kit—basically, the original video track and stem-exported audio records of music, sound effects, and voiceovers, as well as the dialogue text in English.

Anyone would then be able to create his/her own dubbing and release a localized video. It's all to be released under the terms of a Creative Commons license, after all.

Does it bother you at all that the quality of some fan dubs could be subpar? Or is it just the reality that you choose to accept?

It's really not our responsibility. We'll just publish the fan dubber kit and see how it goes. We are really curious about how this will turn out.

We could launch some sort of a competition, but it's something I really hate to do. It's hard enough to tell someone his/her work wasn't good enough even when you see the person did his/her best. So we take the Creative Commons remix way.

Planeta.ru, which you chose as the crowdfunding platform, isn't even available in English. Is there a way for people to support your project somehow?

Sure, we are on Patreon.

The visuals have considerably changed in comparison to the demo from three years ago. What made the major impact?

When we finished the demo, we realized that our resources were depleted. We weren't happy with the outcome. We spent too much time doing technical things like vectorization and too little time being creative.

The way things were going, we couldn't possibly complete the whole movie. So we needed a new approach: a way to keep the visuals enjoyable while relying on technology that we could realistically handle.

Another major factor is the arrival of Anastasia Majzhegisheva, our new art director. She's only 16 years old, but she's very talented and she gets Japanese animation.

Have there been any other changes in the team?

Nikolai Mamashev, who was one of the major contributors to the demo, is still part of the team, but now he mostly does concept art and is extremely busy with commercial projects.

At certain production stages, like colouring, we started getting kids from school involved, to mutual benefit.

How much has your workflow and toolchain changed?

A lot. It's now more of a cutout animation. We still use elements of frame-based animation, but we don't do any morphing whatsoever.

It's a deliberate change we made after releasing the demo three years ago, and we significantly improved Synfig in that respect. The software now has skeletal animation which also greatly simplifies our workflow.

Basic sound support in Synfig is beneficial too, although, frankly, it could have been better.

As for digital painting, it's Krita all the way now. We barely use anything else.

More than that, we rewrote Remake, our smart rendering manager, from scratch. The new project is called RenderChan. It's far more capable and supports the free/libre Afanasy render farm.

We still use Blender VSE for video editing, but that's pretty much it. We have just a few 3D elements in shots.

The production pipeline is still a work in progress though. We hope to be able to switch to Cobra soon—it's a new rendering engine in Synfig. That means we really, really need to make Cobra usable ASAP.

Have you already succumbed to the international Natron craze? :)

Not really, no. As a matter of fact, I haven't even had a chance to try it. We do all compositing inside Synfig. For now, it's more than enough.

GIMP 2.9.2 Released, How About Features Trivia?

In a surge of long overdue updates, the GIMP team has made the first public release in the 2.9.x series. It's completely GEGL-based, has 16/32-bit per channel editing and new tools. It's also surprisingly stable, even for the faint of heart.

Obligatory disclaimer: I'm currently affiliated with the upstream GIMP project. Please keep that in mind when you think you've stumbled upon a biased opinion and thought you'd call LGW out.

One might expect a detailed review here, which totally makes sense; however, writing two similar texts for both the upstream GIMP project and LGW would seem unwise. So there: the news post at GIMP.org briefly covers most angles of this release, while this article focuses on features trivia and possible areas of contribution.

The GEGL port and HDR

Originally launched in 2000 by a couple of developers from Rhythm & Hues visual effects studio, the GEGL project didn't have it easy. It took 7 years to get it to GIMP at all, then another 8 years to power all of GIMP.

So naturally, after years and years (and years) of waiting the very first thing people would be checking in GIMP 2.9.2 is this:

First and foremost, 64-bit is there mostly for show right now, although GIMP will open and export 64-bit FITS files, should you find any.

That said, you can use GIMP 2.9.2 to open a 32-bit float OpenEXR file, adjust color curves, apply filters, then overwrite that OpenEXR file or export it under a different name. Job done.

The same applies to PNG, TIFF, and PSD files: respective plugins have been updated to support 16/32-bit per channel data to make high bit depth support actually useful even for beta testers.

All retouching and color adjustment tools, as well as most, if not all plugins are functional in 16/32-bit modes. There's also basic loading and exporting of OpenEXR files available (no layers, no fancy features from v2.0).

GIMP also provides several tonemapping operators via the GEGL tool, should you want to go back to low dynamic range imaging.

Mantiuk06 tonemapping operation
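To make the idea of a tonemapping operator concrete, here is a minimal Python sketch of a simple global curve in the spirit of Reinhard's L/(1+L). It is purely illustrative: the contrast-domain Mantiuk06 algorithm that GEGL actually ships is far more elaborate.

```python
def tonemap(luminance):
    """Compress scene-referred luminance (0..infinity) into [0, 1).

    A simple global curve in the spirit of Reinhard's L/(1+L);
    purely illustrative, NOT GEGL's mantiuk06 operation.
    """
    return [lum / (1.0 + lum) for lum in luminance]

# shadows stay almost linear, highlights get compressed hard
ldr = tonemap([0.0, 1.0, 4.0, 100.0])
```

Mid-tones survive nearly unchanged, while a 100x highlight lands just below 1.0—which is the whole point of going back to low dynamic range output.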

There are, however, at least two major features in GEGL that are not yet exposed in GIMP:

  • RGBE (.hdr) loading and exporting;
  • basic HDR merging from exposure stacks.

This is one of the areas where an interested developer could make a useful contribution at a comparatively low cost in time.

In particular, adding a GEGL-based HDR merge tool to GIMP should be easier now thanks to a widget for using multiple inputs to one GEGL operation (which would be exp-combine).
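For the curious, merging an exposure stack boils down to a weighted average of per-frame radiance estimates. The sketch below is a hypothetical Debevec-style illustration of the principle, not the actual exp-combine code:

```python
def merge_exposures(frames, times):
    """Merge an exposure stack into per-pixel radiance estimates.

    frames: per-exposure lists of pixel values in [0, 1];
    times: corresponding shutter times in seconds.
    A hat-shaped weight trusts mid-tones and distrusts clipped or
    noisy extremes. Illustrative only; not GEGL's exp-combine code.
    """
    def weight(v):
        return 1.0 - abs(2.0 * v - 1.0)  # 1.0 at mid-grey, 0.0 at 0 and 1

    merged = []
    for pixel in zip(*frames):
        num = sum(weight(v) * (v / t) for v, t in zip(pixel, times))
        den = sum(weight(v) for v in pixel)
        # if every sample is clipped, fall back to the longest exposure
        merged.append(num / den if den > 0 else pixel[-1] / times[-1])
    return merged
```

With frames ordered from shortest to longest exposure, clipped samples simply drop out of the average instead of polluting the radiance estimate.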

GEGL operations

Currently 57 GIMP plugins are listed as completely ported to become GEGL operations, and 27 more ports are listed as work in progress. That leaves 37 more plugins to port, so the majority of the work appears to be done.

Additionally, GEGL features over 50 original filters, although some of them are currently blacklisted, because they need to be completed. Also, some of the new operations were written to implement certain features in GIMP tools. E.g. the Distance Map operation is used by the Blend tool for the Shape Burst mode, and both matting operations (Global and Levin) are used by the Foreground Select tool to provide mask generation with subpixel precision (think hair and other thin objects).

Various new operations exposed in GIMP, like Exposure (located in the Colors menu) and High Pass (available via the GEGL tool), are quite handy in photography workflows.

Note that if you are used to "Mono" switch in the Channel Mixer dialog, this desaturation method is now available through a dedicated Mono Mixer operation (Colors->Desaturate submenu). It might take some getting used to.

Mono Mixer
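Conceptually, the Mono Mixer does nothing more than a weighted sum of the three channels. A toy sketch, with hypothetical Rec.709-ish default weights (in GIMP the three multipliers are the user-facing sliders):

```python
def mono_mix(rgb, weights=(0.21, 0.72, 0.07)):
    """Collapse one RGB pixel to grey via per-channel multipliers.

    The defaults roughly follow Rec.709 luma; toy sketch only,
    not the actual GEGL mono-mixer code.
    """
    r, g, b = rgb
    wr, wg, wb = weights
    return r * wr + g * wg + b * wb
```

Pushing one weight up and the others down is how you get the classic "red filter" or "green filter" black-and-white looks.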

It's also worth mentioning that 41 of both ports and original GEGL operations have OpenCL versions, so they can run on a GPU.

And while the immensely popular external G'MIC plugin is not going to become a GEGL operation any time soon (most likely, ever), it has recently become ready to be used in conjunction with GIMP 2.9.x in any precision mode.

There are some technical aspects about GIMP filters and GEGL operations in GIMP 2.9.x that you might want to know as well.

First of all, some plugins have only been ported to use GEGL buffers, while others have become full-blown GEGL operations. In terms of programming time, the former is far cheaper than the latter, so why go the extra mile when GIMP 2.10 is long overdue and time could be spent more wisely?

Softglow

Porting plugins to use GEGL buffers simply means that a filter can operate on whatever image data you throw at it, be it 8-bit integer or 32-bit per color channel floating point. Which is great, because e.g. Photoshop CS2 users who tried the 32-bit mode quickly learnt they couldn't do quite a lot, at least until CS4, released several years later.

The downside of this comparatively cheap approach is that in a future non-destructive GIMP these filters would be sad destructive remnants of the past. They would take bitmap data from a buffer node in the composition tree and overwrite it directly, so you would not be able to tweak their settings at a later time.

So the long-term goal is still to move as much as possible to GEGL. And that comes at a price.

First of all, you would have to rewrite the code in a slightly different manner. Then you would have to take an extra step and write some special UI in GIMP for the newly created GEGL op. The reason?

While the GEGL tool skeleton is nice for operations with maybe half a dozen settings (see the Softglow filter screenshot above), using something like an automatically generated UI for e.g. Fractal Explorer would soon get you to lose your cool:

Old vs. new Fractal Explorer

The good news is that writing custom UIs is not particularly difficult, and there are examples to learn from, such as the Diffraction Patterns op:

Diffraction Patterns operation

As you can see, it looks like the former plugin with tabbed UI and it has all the benefits of being a GEGL operation, such as on-canvas preview, named presets, and, of course, being future-proof for non-destructive workflows.

FFmpeg support in GEGL

If you have already read the changelog for the two latest releases of GEGL, chances are that you are slightly puzzled about FFmpeg support. What would GEGL need it for? Well, there's some history involved.

Øyvind Kolås started working on GEGL ca. 10 years ago by creating its smaller fork called gggl and using it for a video compositor/editor called Bauxite. That's why GEGL has FFmpeg support in the first place.

Recently Øyvind was sponsored by The Grid to revive ff:load and ff:save operations. These ops drive the development of the iconographer project and add video capabilities to The Grid's artificial intelligence based automatic website generator.

The FFmpeg-based loading and saving of frames could also come in handy for the GIMP Animation Package project, should it receive much needed revamp. At the very least, they would simplify loading frames from video files into GIMP.

New Tools

The new version has 6 new tools—2 stable, 4 experimental. Here's some trivia you might want to know.

GIMP is typically referred to as a tool that falls behind Photoshop. Opinions of critics differ: some say it's like Photoshop v5, others graciously upgrade it all the way to a CS2 equivalent.

If you've been following the project for a while, you probably know that, anecdotally, the Liquid Rescale plugin was made available a year ahead of Photoshop CS5 Extended. And you probably know that Resynthesizer made inpainting available in GIMP a decade before Content-Aware Fill made its way to Photoshop CS6:

But there's more. One of the most interesting new features in GIMP 2.9.2 is the Warp Transform tool written by Michael Muré during the Google Summer of Code 2011 program.

It's the interactive on-canvas version of the venerable iWarp plugin that looked very much like a poor copy of Photoshop's Liquify filter. Except it was introduced to GIMP in 1997, while Liquify first appeared in Photoshop 6, released in 2000.

Warp Transform reproduces all features of the original plugin, including animation via layers, and adds the sorely missing Erase mode that's designed to selectively retract some of the deformations you added. The mode isn't yet functioning correctly, so you won't restore original data to its pixel-crisp state, but there are a few more 2.9.x releases ahead to take care of that.

The Unified Transform tool is a great example of how much an interested developer can do, if he/she is persistent.

Originally, merging the Rotate, Scale, Shear, and Perspective tools into a single one was roughly scheduled for version 3.6. This would prove to be challenging, what with the Sun having exploded by that time and the Earth being a scorched piece of rock rushing through space, with a bunch of partying water bears on its back.

But Mikael Magnusson decided he'd give it a shot out of curiosity. When the team discovered that he had already done a good chunk of the work, he was invited to participate at Google Summer of Code 2012 program, where he completed this work.

Unfortunately, it's also an example of how much the GEGL port delayed getting cool new features into the hands of benevolent, if slightly irritated masses.

Internal Search System

Over the years GIMP has amassed so many features that locating them can be a bit overwhelming for new users. One way to deal with this is to review the menu structure, plugin names, and their tooltips in the menu etc., maybe cut the most bizarre ones and move them into some sort of an 'extras' project.

Srihari Sriraman came up with a different solution: he implemented an internal search system. The system, accessible via Help->Search and Run a Command, reads the names of menu items and their descriptions and tries to find a match for the keyword that you specify in the search window.

Searching action in GIMP

As you can see, it does find irrelevant matches, because some tooltips provide an overly technical explanation (unsharp mask uses blurring internally to sharpen, and the tooltip says so, hence the match). This could eventually lead to some search optimization of tooltips.
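The matching logic itself is simple enough to sketch in a few lines of Python. The action data here is made up for illustration; it is not GIMP's real action registry:

```python
def search_actions(actions, query):
    """Find actions whose name or tooltip contains the query.

    Name matches are ranked above tooltip-only matches, which is why a
    search for 'blur' also surfaces Unsharp Mask: its tooltip mentions
    blurring. Hypothetical data; not GIMP's actual action registry.
    """
    q = query.lower()
    by_name = [a for a in actions if q in a["name"].lower()]
    by_tooltip = [a for a in actions
                  if a not in by_name and q in a["tooltip"].lower()]
    return by_name + by_tooltip

actions = [
    {"name": "Gaussian Blur", "tooltip": "Blur the image"},
    {"name": "Unsharp Mask", "tooltip": "Sharpen via a blurred copy"},
]
hits = search_actions(actions, "blur")
```

Tooltip matching is exactly what produces the "irrelevant" hits described above, which is why cleaning up tooltip wording would double as search optimization.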

Color Management

The news post at gimp.org casually mentions the completely rewritten color management plugin in GIMP. What it actually means is that Michael Natterer postponed the 2.9.2 release in April (originally planned to coincide with Libre Graphics Meeting 2015) and focused on rewriting the code for the next half a year.

The old color management plugin has been completely removed. Instead, libgimpcolor, one of GIMP's internal libraries, got a new API for accessing ICC profile data, color space conversions etc.

Since GIMP reads and writes OpenEXR files now, it seems obvious that GIMP should support ACES via OpenColorIO, much like Blender and Krita. This has been only briefly discussed by the team so far, and the agreement is that a patch would be accepted for review. So someone needs to sit down and write the code.

What about CMYK?

Speaking of color, nearly every time there's a new GIMP release, even if it's just a minor bugfix update, someone asks whether CMYK support was added. This topic is now covered in the new FAQ at gimp.org, but there's one more tiny clarification to make.

Since autumn 2014, GEGL has had an experimental (and thus not built by default) operation called Ink Simulator. It's what one might call a prerequisite for implementing full CMYK support (actually, separation into an arbitrary number of plates) in GIMP. While the team gives this task a low priority (see the FAQ for an explanation), this operation is a good start for someone interested in working on CMYK in GIMP.

Digital Painting

Changes to the native brush engine in GIMP are minor in the 2.9.x series due to Alexia's maternity leave. Even so, painting tools got Hardness and Force sliders, as well as the optional locking of brush size to zoom.

Somewhat unexpectedly, most other changes in the painting department stem indirectly from the GIMP Painter fork by sigtech. The team evaluated various improvements in the fork and reimplemented them in the upstream GIMP project.

Canvas rotation and flipping

Canvas rotation and horizontal flipping. Featuring artwork by Evelyne Schulz.

Interestingly, while most of those new features might look major to painters, they actually turned out to be low-hanging fruit in terms of programming effort. Most bits had already been in place, hence GIMP 2.9.2 features canvas rotation and flipping, as well as an automatically generated palette of recently used colors.
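The recent-colors palette behaves like a classic most-recently-used list. A tiny behavioural sketch (the details of GIMP's real implementation may differ):

```python
def remember_color(palette, color, limit=10):
    """Push a colour onto an MRU palette: newest first, no
    duplicates, bounded length. Behavioural sketch only; GIMP's
    implementation details may differ.
    """
    palette = [c for c in palette if c != color]
    palette.insert(0, color)
    return palette[:limit]
```

Re-picking a colour simply bubbles it back to the front instead of adding a duplicate entry.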

Another new feature is experimental support for the MyPaint Brush engine. This is another idea from the GIMP Painter fork. The implementation is cleaner in programming terms, but it is quite incomplete and needs serious work before the new brush tool can be enabled by default.

MyPaint Brush tool

Some Takeaways For Casual Observers and Potential Contributors

As seen in recently released GIMP 2.9.2, the upcoming v2.10 is going to be a massive improvement with highlights such as:

  • high bit depth support (16/32-bit per channel);
  • on-canvas preview for filters;
  • OpenEXR support;
  • better transformation tools;
  • new digital painting features;
  • fully functional color management;
  • improved file formats support.

Much of what could be said about the development pace in the GIMP project has already been extensively covered in a recent editorial.

To reiterate, a lot of anticipated new features are blocked by the lack of GIMP 2.10 (complete GEGL port) and GIMP 3.0 (GTK+3 port) releases. There are not enough human resources to speed it up, and available developers are not crowdfundable due to existing work and family commitments.

However, for interested contributors there are ways to improve both GIMP and GEGL without getting frustrated by the lack of releases featuring their work. Some of them have been outlined above, here are a few more:

  • Create new apps that use GEGL (example: GNOME Photos).
  • Port more GIMP filters to GEGL or create entirely new GEGL operations (both would be almost immediately available to users).
  • Create OpenCL versions of GEGL operations.

All of these contributions will directly or indirectly improve GIMP.

With that—thanks for reading!


darktable 2.0 released with printing support

Darktable, free RAW processing software for Linux and Mac, got a major update just in time for your festive season.

The most visible new feature is the print module that uses CUPS. Printing is completely color-managed, you can tweak positions of images on paper etc. All the basics are in place.

print module in darktable

A nice "perk" of this new feature is exporting to PDF in the export module.

The other important change is improved color management support. The darkroom mode now features handy toggles for softproofing and gamut check below the viewport (darktable uses a cyan color to fill out-of-gamut areas). Additionally, thumbnails are properly color-managed now.
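The gamut check is conceptually a per-pixel test against the output profile. The sketch below fakes the profile query with a plain predicate; a real soft-proofing pipeline would ask the destination ICC profile (via a CMS such as LCMS) whether each colour is reproducible:

```python
def gamut_check(pixels, in_gamut, marker=(0.0, 1.0, 1.0)):
    """Replace out-of-gamut pixels with a marker colour (cyan here,
    like darktable's overlay). `in_gamut` stands in for the ICC
    profile query a real soft-proofing pipeline would perform.
    """
    return [p if in_gamut(p) else marker for p in pixels]

# toy predicate: 'in gamut' means every channel fits into [0, 1]
in_srgb = lambda p: all(0.0 <= c <= 1.0 for c in p)
marked = gamut_check([(0.5, 0.5, 0.5), (1.2, 0.0, 0.0)], in_srgb)
```

The marker colour is deliberately garish so the problem areas jump out of the image at a glance.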

Something I personally consider a major improvement in terms of getting darktable to work nicely out of the box is that the viewport is finally automatically sized. No longer do you need to go through the trial-and-error routine of setting it up in the preferences dialog. It just works. Moreover, the mipmap cache has been replaced with a thumbnail cache, which makes a huge difference. Everything is really a lot faster.

film grain added in darktable

If you worry about losing your data (of course you do), darktable 2.0 finally supports deleting images to the system trash (where available).

The port to Gtk+3 widget set is yet another major change that you might or might not care about much. It's mostly to bring darktable up to date with recent changes in Gtk+ and simplify support for HiDPI displays (think Retina, 4K, 5K etc.)

The new version features just two additional image processing modules:

  • Color reconstruction attempts to restore useful data from overexposed areas in your photos.
  • Raw black/white point module is pretty much an internal feature that the team hopes you never ever touch (of course you will). It was a prerequisite step towards dual-ISO support and better denoising.

Other existing modules got all sort of tweaks and updates. Most notably, deflicker from Magic Lantern was added to the exposure module.
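The core idea of deflickering is easy to state: pick a brightness percentile, and compute per-frame EV corrections that pull it to a common target, smoothing out exposure jumps in a timelapse. A hypothetical sketch of that arithmetic (darktable's actual implementation works on raw histograms):

```python
import math

def deflicker_ev(target, frame_values):
    """EV corrections that pull each frame's chosen brightness
    percentile to a common target. Sketch of the general idea only,
    not darktable's deflicker code.
    """
    return [math.log2(target / v) for v in frame_values]

# a frame twice as bright as the target needs -1 EV, and vice versa
corrections = deflicker_ev(0.18, [0.18, 0.36, 0.09])
```

Applying the resulting EV offsets in the exposure module is what evens out the frame-to-frame flicker.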

Additionally, the watermark module features a simple-text.svg template now, so that you could apply a configurable text line to your photos. Which means that with a frame plugin and two instances of watermark you can use darktable for the most despicable reason ever:

making a meme in darktable

The most important change in Lua scripting is that scripts can now add buttons, sliders, and other user interface widgets to the lighttable view. To that end, the team started a new repository for scripts on GitHub.

Finally, the usual part of every release: updates to camera support:

  • Base curves for 8 more cameras by Canon, Olympus, Panasonic, and Sony.
  • White balance presets for 30 new cameras by Canon, Panasonic, Pentax, and Sony.
  • Noise profiles for 16 more cameras by Canon, Fujifilm, Nikon, Olympus, Panasonic, Pentax, and Sony.

For a more complete list of changes, please refer to the release announcement. Robert Hutton also shot a nice video covering the most important changes in this version of darktable:

LGW spoke to Johannes Hanika, Tobias Ellinghaus, Roman Lebedev, and Jeremy Rosen.

Changes in v2.0 could be summarized as one major new feature (printing) and lots of both under-the-hood and user interaction changes (Gtk+3 port, keyboard shortcuts etc.). All in all, it's more of a gradual improvement of the existing features. Is this mostly because of the time and efforts that the Gtk+3 port took? Or would you say that you are now at the stage where the feature set is pretty much settled?

Tobias: That's a tough question. The main reason was surely that the Gtk+3 port took some time. Secondly, the main motivation for most of us is scratching our itches, and I guess that most of the major ones are scratched by now. That doesn't mean that we have no more ideas what we'd like to see changed or added, but at least most low-hanging fruits are picked, so everything new takes more time and effort than big changes done in the past.

Roman: The Gtk+3 port, as it seems, was the thing that got me initially involved with the project. On its own, just the port (i.e. rewriting all the necessary things, and making it compile and mostly be functional) did not take too long, no more than a week, and was finished even before the previous release happened (v1.6, that is). But it was the stabilization work, i.e. fixing all those small things that are hard to notice, but are irritating and make for a bad user experience, that took a while.

Johannes: As far as I'm concerned, yes, darktable is feature complete. The under-the-hood changes are also pretty far-reaching and another reason why we call it 2.0.0. The Gtk+3/GUI part is of course the most visible and the one you can most easily summarize.

Jeremy: I'd like to emphasize the "under the hood" part. We did rewrite all our cache management, and that's a pretty complicated part of our infrastructure. I don't think this cycle was slow, it's just that most of it is infrastructure work needed if we want darktable's visible feature set to grow in the future...

color balance adjusted in darktable

Darktable seems to be following the general industry trend where software for processing RAW images becomes self-sufficient, with non-destructive local editing features such as a clone tool, as well as sophisticated selection and masking features. In the past, I've seen you talking about not trying to make a general-purpose image editor out of darktable, but these features just seem to creep in no matter what; you are even considering adding a Liquify-like tool made by a contributor. Would you say that your project vision has substantially changed in the past? How would you define it now?

Tobias: I don't see too many general image manipulation features creeping in. We have had masks for a while, and the liquify/warping thing would be another one, but besides that I don't see anything. There is also the question of where to draw the line. Is everything besides global filters (exposure, levels, ...) already a step towards a general purpose editor? Are masks the line being crossed? I don't know for sure, but for me it's mostly pushing individual pixels, working with layers, merging several images. We do none of those and I hope we never will.

Johannes: I think this is caused by how darktable is governed. It's very much driven by the needs of individual developers, and we're very open when it comes to accepting the work of motivated contributors. We have a large dev base, so I guess it was just a matter of time until someone felt the need for this or that and just went ahead and implemented it. I guess you could say we weren't consistent enough in rejecting patches, but so far I don't think this strategy has hurt us much. To the contrary, it helps to foster a large community of motivated developers.

HDR merging does exist though, and there's even a feature request to add manual/automatic alignment. And both duplication and configurable blending of processing modules are a lot like working with layers, even though the processing pipeline is fixed.

Tobias: Yes, but that doesn't counter my point: Editing single pixels is out of context, general calculations like that fit.

Johannes: To give a very specific answer to this very specific question: the HDR merging works on pre-demosaic raw data (which is why we have it, it's substantially simpler than/different to other tools except Wenzel's hdrmerge which came after IIRC). So automatic alignment is not possible (or even manual for that matter).

exposure adjusted in darktable

Have you already defined any major milestones for future development?

Tobias: No. Version 2.0 had the predefined milestone "Gtk+3 port", but that was an exception. Normally we start working on things we like, new features pile up and at some point we say "hey, that looks cool already, and we didn't have a release for a while, let's stabilize and get this to the users". There is a lot less planning involved than many might think.

Roman: As Tobias said, there are rarely pre-defined milestones. It is more like, someone has some cool idea, or needs some functionality that is not there yet, and he has time to implement it.

Personally, I have been working on an image operation for highlight reconstruction via inpainting. There are several of them already in darktable, but frankly, that is currently one of the important features still not completely handled by darktable.

There has been a lot of preparatory work under the hood over the last two releases, which has now opened up the possibility of some interesting things, say native support for Magic Lantern's Dual ISO, or a new version of our profiled denoise image operation.

I'm also looking into adding yet another process() function to image operations that would not use any intrinsic instructions, but OpenMP SIMD only, thus freeing darktable from any hard dependency on x86 processors, i.e. it could work on ARM64 too.

Jeremy: I would like to add the manipulation of actual image parameters to Lua, that is a big chunk of work. Apart from that it will mainly depend on what people do/want to do.

What kind of impact on users' workflows do you think the adding of Lua scripting has done so far? What are the most interesting things you've seen people do with Lua scripting in darktable?

Tobias: Good question. We have been slowly adding Lua support since 1.4, but only now are we starting to get to a point where more advanced features can be done. In the future I can see quite some fancy scripts being written that people can just use, instead of everyone coding the same helpers over and over again. That's also the motivation for our Lua scripts repository on GitHub. While there are some official scripts, i.e., mostly written and maintained by Jeremy and me, we want them to be seen as an extension to the Lua documentation, so that others can get ideas on how to use our Lua API.

The results of that can be seen in the 'contrib' directory. The examples there range from background music for darktable's slideshows to a hook that uses 'mencoder' to assemble timelapses. We hope to see many more contributions in the future.

Jeremy: Lua was added mainly for users that have a specific workflow that goes against the most common workflow. Darktable will follow the most common workflow, but Lua allows other users to adapt DT to their specific needs.

That being said, I agree with Tobias that Lua in 1.6 was still missing some bricks to make it really useful. Without the possibility to add widgets (buttons, sliders etc.) to darktable, it was impossible to make a script that was really usable without technical knowledge.

With the Lua repository and the possibility to add widgets, things should go crazy really fast. Did you know that you can remote-control darktable via D-Bus by sending Lua commands?

white balance adjusted in darktable

In early days of darktable quite a few features (e.g. wavelet-based) came directly from papers published at SIGGRAPH etc. What's your relationship with the academic world these days?

Tobias: We didn't add many new image operations recently, and those that got added were mostly not so sophisticated that we had to take the ideas from papers. That doesn't mean that our link to the academic world was dropped: Johannes is still working as a researcher at a university, and when new papers come out we might think about implementing something new, too.

Johannes: Yes, as Tobias says. But then again, graphics research is my profession, and darktable is for fun. No, seriously, the last few SIGGRAPHs didn't have any papers that seemed a good fit for implementation in darktable to me.

Several years ago you switched to rawspeed library by Klaus Post from the Rawstudio project. Now it looks like darktable is the primary "user" of rawspeed, and your own Pedro Côrte-Real is 2nd most active contributor to the library. Doesn't it feel at least a tiny bit weird? ;)

Tobias: I think it's a great example of how open source software can benefit from each other. I'm not sure if that's weird or just a bit funny.

How has your relationship with the Magic Lantern project been evolving, given the deflicker feature etc.?

Tobias: The deflicker code wasn't so much contributed by the Magic Lantern folks as written by Roman with inspiration from how Magic Lantern does it. I don't know if he used any code from them, maybe he can clarify. Apart from deflicker, there are also plans to support their Dual ISO feature natively.

Roman: The only direct contribution from the Magic Lantern project was the highlight reconstruction algorithm that made it into v1.6. The deflicker was implemented by me, as it usually happens, after I needed a way to auto-expose lots of images and found no way to do it. That being said, it uses exactly the same math as deflick.mo does.

Tobias: Even that was not taking code from them. Jo wrote it after talking with Alex at LGM.

Johannes: But it was most inspiring meeting those folks in person. And yes, I have been a lazy ass about implementing this Dual ISO support natively in darktable ever since LGM.

Darktable seems to be doing pretty well without any kind of community funding, which is all the rage these days. What do you think are the reasons for that?

Tobias: Well, we'd need some legal entity that takes care of taxes. And to be honest, we don't need that much money. Our server is sponsored by a nice guy and there are no other expenses. Instead we have been asking our users to donate to LGM for several years now and from what we can see that helped a lot.

As for why we have been doing so well, no idea. Maybe because we are doing what we want without caring if anyone would like it. To the best of our knowledge darktable has exactly 17 users (that number is measured with the scientific method of pulling it out of thin air), so whatever we do, we can lose at most those few. Nothing to worry about.


The new version of darktable is available as source code and a .dmg for Mac OS X. Builds for various Linux distributions have either already landed or are pending.

Krita To Kickstart New Text And Vector Tools


Krita Foundation announced their third Kickstarter project to fund development of new text and vector tools. With the proposed features, the team aims to improve the user experience for, among others, comic book and webcomic artists.

Essentially, the team will ditch the Text tool inherited from Calligra Suite and create an easier-to-use UI for managing text and its styling, improve RTL and complex scripts support (think CJK, Devanagari), add text on path editing, non-destructive bending and distortion of text items etc.

Additionally, they will completely switch to SVG as an internal storage format for vector graphics and improve usability of related editing tools.

There are also 24 stretch goals: from composition guides to reference image docker improvements to LUT baking. In all likelihood we are going to see at least some of the stretch goals done: that was the case for both past Kickstarter campaigns, and after the first two days this new campaign is already ca. 30% funded.

As usual, LGW asked project leader Boudewijn Rempt some technical questions about the development plans within the campaign.

Given the focus on text and vector tools, how many bits of Calligra Suite does Krita still share with the original project?

There is nothing shared anymore: the libraries that we used to share have been forked, so Calligra and Krita have separate and, by now, very different versions of those libraries. That was a really tough decision, but in the end we all realized that office and art applications are just too different.

So, we'll probably drop all the OpenDocument loading and saving code in favor of SVG, with just an OpenDocument to SVG converter for compatibility with old KRA files.

We'll implement a completely new text tool and drop the old text tools and its libraries. As for the vector tools, we'll keep most of that code, since it is already half-ported to SVG, but we'll rework the tools to work better in the context of Krita.

How far do you think Krita should go in terms of vector tools? I'm guessing, you wouldn't want duplicating Karbon/Inkscape. But importing/exporting (EPS, AI, PDF, CDR etc.), boolean operations on paths, masks and clipping paths, groups, and suchlike?

For import/export, only SVG. And the functionality we want to implement first is what's really important for artists: it must support the main thing, the raster art. So, things like vector based speech balloons for comics, or decorative borders for trading cards or some kinds of effects. Boolean ops on paths are really important for comic book frames, for instance.

Regarding text direction flow and OpenType features: how much do Qt and Harfbuzz provide for Krita already, and how much (and what exactly) do you need to write from scratch?

Qt's text layout is a bit limited; it doesn't do top-to-bottom for Japanese, for instance. So likely we'll have to write our own layout engine, but we'll be using HarfBuzz as the glyph shaper.

Do you think it's faster/easier to write and maintain your own engine than to patch Qt?

Well, they serve different purposes: Qt's layout engine is general-purpose and mostly meant for things like the text editor widget or QML labels. We want things like automatic semi-random font substitution that places glyphs from different fonts, so we can have a better imitation of hand-lettered text, for instance. How far we'll be able to take this is a bit of an adventure!

Some specifics of the proposed implementation make it look like you would slightly extend SVG. Is that correct?

Well, first we'll look at what SVG2 proposes and see if that's enough, then we'll check what Inkscape is doing, and if we still need more flexibility, we'll start working on extending SVG with our own namespace.

For vectors, I don't think that will be necessary, but it might be necessary for text. If the kickstarter gets funded, I suspect I'll be mailing Tavmjong Bah a lot!

Stretch goals cover all aspects of Krita: composition, game art, deep painting, general workflow improvements. How did you compile the list?

This January, we had a sprint in Deventer with some developers and some artists (Dmitry, me, beelzy, wolthera), where we went through all the wish bugs and feature requests and classified them. That gave us a big list of wishes of stretch-goal size. Then later on, Timothée, Wolthera, Irina, and I sat down and compiled a list that felt balanced: some things that almost made it in past years, some new things, bigger things, smaller things, something for every user.

One of the stretch goals is audio import for animation sequences. How far are you willing to go there? Just the basics, or do you see things like lipsync happen in the future?

Just the basics: we discussed this with the animators in our community, and lipsyncing just isn't that much of a priority for them. It's more having the music and the movement next to each other.

But that suggests multiple audio objects on the timeline, or would it be just a single track preprocessed in something like Ardour?

For now, a single track!

Is SVG 2 really on life support?


Between the SVG 1.1 W3C Recommendation and SVG 2 in its current form, people have raised kids and sent them off to college. And yet SVG 2 might arrive sometime in the future without quite a few useful features that have already been developed and tested. What's up with that?

During Inkscape's board meeting, Tavmjong Bah shared a write-up on the status of SVG 2, based on recent happenings around the SVG Charter. While we encourage you to read it in its entirety, for the record, here is a quick summary:

  • Very few people actually contribute to the evolution of the SVG specification; entire companies dropped off de facto or are about to drop off the charter de jure.
  • There are not enough implementations of SVG 2 to test proposed new features.
  • So there is not enough content using those features to justify implementation in browsers.
  • Therefore browser vendors are not spending their resources on adding those features.
  • Which means features like gradient meshes and hatches would be axed from SVG 2 and moved to Web Incubator Community Group (WICG).
  • All in all, there is a substantial possibility that the SVG Working Group charter will not be renewed.

For Inkscape users this means that a handful of new features in the upcoming v0.92 may end up unsupported in SVG 2.

This topic clearly involves multiple parties. We are starting with Tavmjong Bah (Inkscape developer, invited expert in the SVG working group) and hope to hear from browser vendors and other charter members to get the full picture.

Tavmjong, how did the SVG WG end up in this situation with regards to SVG 2? Is it "death by committee"?

SVG is a huge specification with a small group working on it. Some of the most active members focused on joint CSS/SVG specs like Compositing and Blending, Transforms, Filters, Masking and Clipping. There wasn't much time left over for working on SVG 2 directly but still we plugged away. Most of the work was on things of little interest to Inkscape but of importance to browsers (like the DOM interface).

The SVG 2 group had their differences, but nothing major (being very pragmatic about things like SVG Fonts). Two major browser vendors provided the co-chairs until the past year. One co-chair was laid off (from Opera) and another changed positions (from Mozilla) and no longer works on SVG. Neither were replaced.

You mention that "there seems to be a disconnect between the browser vendors and the content creators. Even Adobe has expressed frustrations with the current status…". Would you say that SVG prior to v2 as we know it now lacked certain features that appealed to content creators? Or are there more significant factors at play here?

I'm not sure I understand your question. As I see it, from Inkscape's perspective SVG 2 is missing:

  • mesh gradients, supported in PostScript/PDF/Illustrator/etc. and very important to illustrators;
  • hatches, supported by any CAD program and important for technical drawings and for people who use SVG as input for engravers, embroidery, and plotters;
  • solid colors, which are part of SVG 1.2 and a far better way to handle "swatches" than using single-stop gradients;
  • and of course, text in a shape.

There are a lot of other improvements in SVG 2 such as enabling the automatic matching of an arrow head fill color to the path's stroke color, the paint-order property, and better closing path syntax.
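Two of those smaller improvements are easy to show in a few lines of hand-written markup (an illustration, not an example from the spec; rendering support varies by browser):

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="220" height="120">
  <defs>
    <!-- SVG 2: the arrow head picks up whatever color the path is
         stroked with, via the context-stroke paint value. -->
    <marker id="arrow" markerWidth="6" markerHeight="6"
            refX="5" refY="3" orient="auto">
      <path d="M0,0 L6,3 L0,6 z" fill="context-stroke"/>
    </marker>
  </defs>
  <!-- SVG 2 paint-order: paint the stroke under the fill, so a
       heavy outline no longer swallows thin letterforms. -->
  <text x="10" y="60" font-size="40" fill="gold" stroke="black"
        stroke-width="3" paint-order="stroke">SVG 2</text>
  <line x1="10" y1="95" x2="180" y2="95" stroke="crimson"
        stroke-width="3" marker-end="url(#arrow)"/>
</svg>
```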

What are the exact implications of moving all new features to WICG? Who would be working on new features in WICG given how little participation there is at the moment?

The idea of WICG is that the community (i.e. not the browser vendors) develops new ideas that the browser vendors can then have a look at and say yea or nay.

I don't see this as a viable alternative for several reasons including lack of browser buy-in. And it is also not appropriate for SVG 2 features since they have already been developed and tested.

To me it looks like SVG is pretty much at the mercy of browser vendors who don't exactly contribute to its evolution.

Yes. It wasn't always that way. In the early days there were many different SVG renderers, so one could find multiple implementations of most features. It was sufficient for something to remain in the spec if there were two independent implementations.

The idea was that eventually all browsers would support all features, if not right away. Of course, it took IE many years to support SVG.

Now with just a handful of renderers in browsers it becomes harder to find two implementations, and even if there are two, it does not guarantee something staying in the spec. If one browser comes out adamantly against something, then it gets removed (e.g. SVG fonts, <tref> etc.).

Would you say that the existing process gives the specification developers a fair chance to get new useful features included into the final recommendation?

If you had asked me three months ago I would have said yes. Now I would say no.

From what I can see in telecon logs, it looks like browser vendors are OK with new features as long as implementations exist, and accessibility is taken care of.

Inkscape has fully working rendering implementations of meshes (for four years) and hatches (for two or three years). So existing proof-of-implementability is not enough. Accessibility in this context is not an issue.

If the charter isn't renewed, who will be there to sort out whatever comes out of the incubator?

The CSS working group? Or nobody. By the way, the HTML canvas element is suffering from the same problem, in that there is no group to maintain it.

As a principal author of quite a few new features in SVG 2, do you see a way to get major incomplete and missing features done? (stroke positioning, pages etc. — all the recently discussed axed features and more).

I am afraid I don't see the browser vendors spending much effort on them given the discussion last month. The only spec that has had any interest from CSS is the strokes spec (they want to be able to stroke bounding boxes).

Notably, in your write-up, you mostly mention the charter, Inkscape, and browser vendors. What is your understanding of where SVG (1.1 and/or 2) currently stands with regards to authoring applications? Moreover, some features appear to lack the "second implementation". Would it be correct to assume that Inkscape is the only authoring app project actively involved with SVG currently?

Adobe Illustrator supports export to SVG 1.1. I don't know of any SVG 2 features they have implemented (since I don't use it). They did have very strong interest in the things that got removed from SVG and put into joint CSS/SVG specs, so I am sure they support those in their web design programs.

During a WG telecon in early October this year, Dirk Schulze said this: "Adobe have a strong interest in getting more features — e.g. mesh gradients, stroke features, variable width stroke, etc which do not necessarily need to live in the SVG WG, but we do want to see them proceed". So it looks like Inkscape's interests align with those of Adobe. Do you see this as a starting point for a conversation on getting the much required "second implementation" done?

It has been suggested that they may be an ally. Although I was also informed that they are frustrated with participating in the W3C specification process and have seriously cut back their participation (certainly we haven't seen their active participation in the SVG group for a couple of years).

Do you have any advice to users other than creating more content that uses proposed SVG 2 features?

There are three things that the browser vendors seem to be sensitive to:

  1. The opinions of major JavaScript library authors like D3.
  2. Large corporations.
  3. Use counters.

I'm not sure about 1). 2) won't help, although Boeing is a possible ally for hatches since they are converting their technical drawings to SVG. 3) is the easiest place for us to have an impact.

How do browser vendors assess use counters?

They add code to their browsers to track things. Here are Chrome's publicly visible use statistics: https://www.chromestatus.com/metrics/css/popularity.

Basing decisions on this has some serious flaws. It doesn't take into account the fact that something isn't used because it isn't available cross-browser. For example, SVG would have rated very low before its implementation in IE.

Will Inkscape 0.92 expose SVG 2 features by default in team's own binary builds? What will be the team's recommendation to 3rd party distributors?

We should encourage them to enable them. Already, we have 'paint-order' and new filter blending modes in the GUI. Speaking of which, Firefox has never supported Inkscape's layer blending using filters and we've never heard complaints about it.

Inkscape hackfest planned for late June in Paris


Following productive hackfests in 2015 and 2016, the Inkscape team is meeting in Paris later this month for another hackfest. The event is taking place from June 27th through July 1st at Paris's modern science museum, the Cité des sciences et de l'industrie.

(Not quite) coincidentally, the venue is exactly where in 2008 part of the original documentation team met for the first time to work on the official user manual.

So far the hackfest agenda seems to cover many topics from the official roadmap for the next major update of Inkscape: the GTK+3 port, the coordinate system flip, making a C++11 compiler a requirement, splitting less-maintained extensions into an extra package, and improving performance. Which is another reminder that, should the team stick to the plan, they will need all the help they can get to prepare the next release in a sensible amount of time.

The attendees are core developers like Tavmjong Bah, Martin Owens, and Jabier Arraiza, as well as contributors like C Rogers, Cédric Gemy, and Elisa de Castro Guerra. Apart from programming sessions, there's a community meet-up planned for Saturday, July 1st.

The team is currently revamping the project's infrastructure. Most recently they moved to Gitlab for source code hosting and bug tracking, marking a departure from Canonical's Launchpad and Bazaar.

Document Liberation Project announces initial QuarkXPress support


The Document Liberation Project (DLP) announced the first release of libqxp, a library for reading QuarkXPress 3.3–4.1 documents. And this is one hell of a trip down memory lane.

The initiative is a perfect fit for the project's agenda to implement support for as many legacy file formats as possible (see our earlier interview with Fridrich Strba et al.), although the timing is a bit of a puzzle.

History lessons

QuarkXPress was once the king of desktop publishing, with a reported 95% market share at its highest point. But corporate greed, overconfidence, and lack of vision pretty much killed it in the early 2000s, and Adobe InDesign put the final nail in its coffin.

A typical comment to the Ars article (linked above) on the subject looks like this:

We hated Quark, the program and the company. But of course we used it because it was ubiquitous. InDesign 1.0 wasn't great, but we were so desperate to move away from Quark that we slowly converted.

From many discussions on the web regarding Quark and Adobe, it looks like QXP users mostly got their closure in 2003–2004, when Adobe's Creative Suite arrived and settled in, although some stuck with Quark's software through v5 and v6.

Ever since Adobe introduced the subscription-based model in 2013, there's been a somewhat popular notion that Adobe is the new Quark and is on the road to failure. However, after an initial setback in 2013 and 2014, the company's financials have been steadily growing, in terms of both revenue and net income. And since the introduction of Creative Cloud in May 2013, Adobe's stock price is up by ca. 230%. So it looks like they need to try harder to fail.

Although Quark has been trying to win back its former market share by any means deemed necessary, they haven't been very successful. The company eventually refocused on automating content creation, management, publishing, and delivery. There are very few businesses around that still run the once-popular QuarkXPress, let alone the versions from 15–20 years ago that DLP focused on. Which brings us back to the actual topic at hand.

What's in libqxp 0.0.0

The newly released first version of the library is the result of several months of work by Aleksas Pantechovskis, a student from Lithuania, who participated in the Google Summer of Code program this year (again).

Aleksas already has a good track record with the Document Liberation Project: last year, he wrote libzmf, a library for importing Zoner Callisto/Draw v4 and v5 documents.

In this initial release the libqxp library reads:

  • pages and facing pages;
  • boxes (rectangles, ellipses, Bezier);
  • lines, Bezier curves;
  • text objects, including linked text boxes and text on path;
  • font, font faces, size, alignment, paragraph rules, leading, tabs, underline, outline, shadow, subscript, superscript, caps etc.;
  • colors (including shades), gradients (linear, radial, rectangular);
  • line/frame color, width, line caps and corners, arrows, dashes;
  • object groups;
  • rotation.

Some rather important features like custom kerning and tracking aren't yet supported, because the OpenDocument file format doesn't support those. But that's not much of an issue, according to Aleksas:

librevenge is just interfaces, so if there is another output generation lib instead of libodfgen for a format that supports them, then it can use any attributes passed to it.

One big missing part in this release is support for image objects, because, Aleksas says, the picture format seems to be quite complicated.

Development of libqxp sits on top of reverse-engineering work started by Valek Filippov in OLE Toy in 2013 and continued by David Tardon and Aleksas in February 2017. Although libqxp sticks to ancient versions of QuarkXPress for now, OLE Toy can parse some of the data in QXP v6 and v8 (the format has been encrypted since v5), so this might change in the future.

LibreOffice has already been patched to open QXP files; this feature will be available in v6.0 (expected in early 2018). The library itself ships with the usual SVG converter, which you are likely to find of limited use. Also, if all you need is extracting text, there's a perfectly sensible qxp2text converter as well.

Support in Scribus

One would rightfully expect Scribus to be the primary beneficiary of libqxp. But here is some background info.

First of all, the history between Quark and Scribus is rather hairy.

Initially, Scribus was pretty much modeled after QuarkXPress, and the two projects still share some similarities. Early in the history of Scribus, it made a lot of sense to introduce support for QXP files: users, mad at Quark's continuous quirks and bad user support, would jump ship at the very next opportunity.

Paul Johnson, a former Scribus contributor, actually started working on support for QXP files in 2004. But after he had posted to a public mailing list about his progress, he reportedly received a cease-and-desist letter from Quark.

Scribus was nowhere near its current fame at the time, and even now it would not be able to handle legal expenses (save for a theoretical FSF intervention). Back then Paul just stopped working on that project.

Quark didn't quit monitoring Scribus, though, and continued tracking the progress of the project, to the point where developers jokingly discussed blocking Quark's IP address range from accessing Scribus's source code repository (they reportedly had logs of the visits). Eventually Quark turned their attention towards more pressing matters, like losing their market share to Adobe.

Today, much like LibreOffice, Scribus supports both ubiquitous file formats like IDML and bizarre ones like those by Calamus and Viva Designer. It even has support for Quark's XTG files. Getting a QXP importer would also perfectly fit Scribus's narrative.

The team is well aware of the libqxp project; they already have experience writing librevenge-based importers for CorelDRAW, Microsoft Publisher, Macromedia FreeHand etc. So it's likely just a matter of time till they introduce a QuarkXPress importer.

Is there any closure left to get?

Valentina Fork Settles Down As Seamly2D, Valentina Goes On


Four months into a bizarre fork of Valentina, free pattern-making software for fashion designers, Susan Spencer's leg of the fork finally gets rebranded as Seamly2D.

There are now two projects that share the proverbial 99% of the code base: 1) the original Valentina project, forked by its founder Roman Telezhinsky, and 2) Seamly2D, managed by Valentina's other founder, Susan Spencer. But let's roll it back a bit.

The Story

The project was started by Roman Telezhinsky (Ukraine) and Susan Spencer (USA) in 2013. Both founders had made previous attempts at writing software for pattern-as-in-clothes design. However, within the Valentina project, Roman took on the role of writing the code, while Susan quickly gravitated towards community building, PR, handling financials (paying Roman's salary, in fact) etc.

Early on, Roman took a position that basically boils down to this (opennet.ru, 2013):

I work on this project for myself. If anybody else needs it—great. If nobody else needs it, it's fine as well.

Depending on where you are coming from, this either contradicts or complements his more official statement (Valentina blog, 2013):

It's clear that a single person cannot realistically create such a program. So I made it an open project, hoping that I won't be the only one interested in it. I hope it doesn't stop at that.

Despite this rather blunt, classic approach to publishing software under the terms of the GPL, users soon started gathering around the Valentina project. The two main reasons for that were the technical excellence of the software (despite a lot of rough edges) and solid community work.

The former can be explained by the introduction of parametric design into end-user software, which greatly simplified making adjustments, as well as refitting an existing design to a completely different person.

Moreover, with over 50 pattern-making systems supported, the project became somewhat popular both with designers of contemporary clothes and with the historical recreation community, since a significant part of the supported systems cover Victorian tailoring, as well as garment cutting from even earlier centuries.

There's something else that should be factored in to explain the public's interest in Valentina/Seamly2D. Pattern-making software is mostly proprietary and very expensive, even for personal use. Top-notch systems like Gerber AccuMark and Lectra Fashion PLM are targeted at large companies and are in the general arm/leg/kidney ballpark price-wise. If you know exactly how much either of them costs, congratulations: you are an owner of a large fashion business with hundreds of employees.

Less expensive options typically start around $1,000. Some cheaper (and simplistic) solutions exist, and even then vendors would try to charge you for every single extra feature.

And, to the best of our knowledge, none of the above have native Linux versions. Needless to say, none of them is free-as-in-speech.

A user who commented on sodaCAD blog back in 2014 pretty much nailed it:

I've been in the pattern making industry for over 20 years and we REALLY NEED a free/cheap/open solution. It's almost impossible to hire skilled operators in New Zealand simply because nobody can afford to buy the software and get skilled up in it.

That's why breaking the Valentina team in two was dangerous, if inevitable. But this is not the usual case of a couple of programmers having a technical argument. Digging into the story of the conflict between the founders has been an exceptional, if frightful, source of insights into the world of Things That Can Go Wrong On So Many Levels.

  • Language barrier? Check.
  • Mutual misunderstandings and apparent lack of persistence to clear things up? Check.
  • Huge project vision clashes? Check.
  • Being borderline rude to potential contributors? Check.
  • Alleged locking of one founder out of direct communication with potential partners by the other founder? Check.
  • Social awkwardness of one founder enabled by the tendency of the other founder to sweep the dust under the rug? Check.

Arguably, so far the most sensible comment on the whole situation comes from Mario Behling who, at some point in the past, unsuccessfully tried bringing Roman to live in Berlin and work on the project in a hackerspace:

In my opinion they should just calmly do their own things and let it be. I think their worlds are just too far apart.

It's hard to tell how calm they can get. In his most recent post, Roman summarizes his vision of working with a community and uses what one might call "brutal honesty". The statements go well into uneasy territory, breaking almost every rule of contemporary community management. If anything, they hint at exactly how difficult working with him could be for other contributors, something he readily admits in both private conversations and earlier public posts.

And then what?

We could leave it at that, were it not for the fact that, four months into the fork, the amount of confusion about the two separated projects is still staggering. Not least because it's caused by actual stakeholders.

Case in point. A few weeks ago, Susan Spencer launched the Fashion Freedom Initiative (FFI) which is:

...an open community of indie designers, forward thinking businesses, artisan producers, makers, crafters, hackers and doers. We are working together to build and run our own, independent chains for global fashion production.

The initiative seems like an interesting approach to solving quite a few things that are wrong with the fashion industry. The founders appear to rely on Seamly2D as their strongest community-building tool. So it's no surprise that the project has started posting user stories.

The first such story, a Seamly2D testimonial by Megan Rhinehart, founder of Zuit, is a great inspirational read, save for several statements she made.

One of the things I love about Seamly2D is that it is getting translated into so many languages.

It's not. The Transifex account that Susan Spencer keeps pointing users to is owned by Roman Telezhinsky. Its contributors are not translating Seamly2D; they are translating Valentina, and probably don't even know it.

Moreover, she couldn't have been using Seamly2D, unless it was a private build from Git master made within the last couple of weeks. There are simply no builds of Seamly2D to download yet, nor have there been any Seamly2D releases: the 0.6.0.1 release was made a full month prior to the final rebranding. Susan Spencer got the valentina-project.org domain name and the website as part of the separation deal. The downloads section of the website still distributes Valentina builds. It even says "Valentina" right on the front page, next to "Seamly2D".

[Seamly2D] is cloud-based so I can see what the tailor sees. I could potentially add users to help with pattern design and quality control.

Seamly2D is not cloud-based, nor is Valentina. It's a Qt/C++ desktop application that has to be downloaded and installed. When asked for clarification, Ms. Rhinehart replied that there was "a third party app to run it on the cloud" involved. As of December 7, the testimonial retains the original, unedited statement.

It also doesn't help that Seamly2D has two simultaneously maintained GitHub repositories (more on that later). Some of that confusion can be explained by the fact that the separation agreement was made hastily, angry conversations went on for a while, and there were no clean cuts.

Present State of Affairs

In terms of writing actual code, this is what things look like at the moment.

Roman more or less maintains the programming pace, fixing bugs, making various enhancements, writing new features, and publishing test builds. August through October was a busy time for him, less so for November, and he expects December to be a slow month for the project as well.

Code-wise, Seamly2D isn't as efficient so far. Currently, the project confusingly operates on two GitHub repositories:

  1. The one with rebranded repo name and all (or most) converted issues from the Bitbucket tracker, yet without latest changes: https://github.com/fashionfreedom/seamly2d.
  2. The one with the old name and yet with all recent changes, including rebranding in the source code and visuals: https://github.com/valentina-project/vpo2

In fact, since August, changes in what is now Seamly2D code base boil down to rebranding, updating/fixing the build system and setting up automatic builds on a new account, updating various build/contribution related docs, renaming icons, and improvements in generating tiled PDF files. That is, the vast majority of changes doesn't fix bugs or introduce new features.

During a conversation on September 11, Susan Spencer stated:

Since Roman left, we've received offers to contribute from four programmers. They are waiting on the issues list to be recreated.

This is an important part, because the alleged pushing away of contributors by Roman was one of the biggest concerns mentioned by Susan.

However, three weeks after this step was completed, source code changes still weren't pouring into the repository. We asked Mrs. Spencer for insight on that, and then a weird thing happened:

  1. She changed the narrative into what boils down to "we do have programmers, but they are currently unavailable".
  2. She then provided a rather believable explanation for each "missing programmer" case, without naming anyone or giving away too many details in order to protect the privacy of the alleged future contributors.
  3. Following that, she mentioned another technical detail about all of them that, if published, would raise questions about possibility of actual programming to be done in the project.
  4. Finally, she specifically forbade publicly mentioning specific information she provided, out of "fear that ... there could be a big question mark on our community" within this article.

Instead, Mrs. Spencer provided this statement:

I would like for the take-away from all this to be that our all-volunteer community is handling the situation rather well. They are an open, honest, and upstanding group of nice people who care about each other and about the project. I'm quite proud of them.

All in all, Susan Spencer seems genuinely defensive of the community she helped grow, although in this particular case this leads to questionable PR tactics.

Aftermath

It would be extremely easy to blame either side for what's currently going on with both projects. However, even from what's left unmoderated in the forum, it's clear that there has been a lot of mutual hostility and, above all, a lack of understanding coming from both founders and community members. Some of it continues to pour out one way or another.

Maintainers of both Seamly2D and Valentina emphasize that their projects are doing well. However, the former has been mostly lacking visibly active developers since day one, and the latter doesn't get nearly as much community awareness as before.

In the coming months and years, we are likely to see for ourselves whether a community/PR manager can build a team of developers, and whether a developer can succeed in building a strong, dedicated community.

If you ask which project you should be tracking from now on, the best answer we can give you is "both, if you can stand occasional outbreaks of passive aggression and nasty remarks". Nobody actually promised that free software would be a peaceful ecosystem. But it will get better.

DIGImend Project Revived To Improve Non-Wacom Tablets Support on Linux

Nikolai Kondrashov rebooted the DIGImend project that brings support for Genius, Huion, Yiynova, and other non-Wacom graphic tablets to Linux users.

After 9 years of working on DIGImend for free and 1 year of hiatus, Nikolai is now relying on both corporate support and recurring donations via Patreon to fund his work on the project.

Don't be put off by his statement that, with $1300 per month (pre-tax), he would dedicate a mere two hours to the project's code each weekend (or buy tablets to hack on). Judging by the live hacking sessions he broadcasts on YouTube, a lot of work gets done in two hours.

Earlier this year, he already added support for Ugee's M540 and EX07 tablets, and several days ago support for the Ugee 2150 tablet landed. In a thread on Google+ (yes, it's still a thing), he admitted he would also be interested in working on advanced configuration for such tablets in GNOME.

Visit Nikolai's Patreon page for more information.


Introducing libresvg, a rival to librsvg and QtSvg

Evgeniy Raizner announced the first public release of libresvg. This new SVG rendering library aims to replace librsvg and QtSvg, as well as become an alternative to using Inkscape as an SVG to PNG converter.

In the community, Evgeniy is mostly known for SVG Cleaner, a very useful tool for making SVG files a lot smaller by removing all the cruft such as unused and invisible elements. He started libresvg about a year ago and has been working on the first release ever since. Today, libresvg v0.1 supports a subset of SVG Full 1.1 without a number of elements (more on that later).

The reason libresvg exists is that Evgeniy is quite unhappy with the existing options, librsvg (hereafter rsvg) and QtSvg (SVG Cleaner has a Qt GUI). He claims that the former has serious architectural issues and a plethora of parser bugs, and is difficult to ship on platforms other than Linux (being hardwired to Cairo and glib).

Comparison between various SVG renderers

At the same time, QtSvg has rather incomplete support for SVG elements.

Design specifics

While libresvg is written in Rust, and librsvg is being ported to Rust as well, there are some technical differences between the two that Evgeniy outlined in both his original post at linux.org.ru and a private email exchange. They mostly boil down to how he tries to avoid things he sees as architectural imperfections of rsvg.

First of all, libresvg is designed differently. It parses an SVG document into a DOM, does some preprocessing such as cruft removal and markup normalization, then constructs a simplified DOM that contains commands for the rendering backend. Parsing and the other steps are done with his own toolchain (xmlparser, svgparser, svgdom), compiled into a single binary that works as a command-line converter.

With libresvg, preprocessing only happens once (Evgeniy claims this doesn’t seem to be the case for rsvg when rendering to canvas); after that, 99% of the rendering time is spent on the Cairo/Qt side. Which also means a smaller CPU footprint for the library.

So yes, there’s that too: libresvg supports multiple drawing backends. Qt and Cairo are already done, Skia is on the roadmap.
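The two-stage design described above — one expensive preprocessing pass producing a simplified, backend-agnostic command list, then a cheap rendering pass that any backend can replay — can be sketched roughly like this. This is a minimal illustration in Python; all type and function names here are hypothetical, not libresvg's actual Rust API:

```python
from dataclasses import dataclass

# A "simplified DOM": a flat list of backend-agnostic drawing commands,
# produced once from the source SVG tree.
@dataclass
class DrawPath:
    d: str     # normalized path data
    fill: str  # resolved fill colour (inheritance already applied)

def preprocess(svg_nodes):
    """One-time pass: drop invisible cruft, resolve inherited attributes,
    and emit a flat command list."""
    commands = []
    for node in svg_nodes:
        if node.get("display") == "none":
            continue  # cruft removal: invisible elements never reach the backend
        commands.append(DrawPath(d=node["d"], fill=node.get("fill", "black")))
    return commands

def render(commands, backend):
    """Per-frame pass: replay the same commands on any backend (Cairo, Qt, ...)."""
    for cmd in commands:
        backend.draw_path(cmd.d, cmd.fill)

class ListBackend:
    # Toy backend that records calls instead of painting.
    def __init__(self):
        self.calls = []
    def draw_path(self, d, fill):
        self.calls.append((d, fill))

nodes = [{"d": "M0 0H10", "fill": "red"},
         {"d": "M5 5V10", "display": "none"}]
cmds = preprocess(nodes)   # done once
backend = ListBackend()
render(cmds, backend)      # can be repeated cheaply, on any backend
print(backend.calls)       # → [('M0 0H10', 'red')]
```

The point of the split is that adding a new backend (say, Skia) only means implementing the small drawing interface; the parsing and normalization code is shared.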

How much of SVG is supported

As of v0.1, libresvg surpasses QtSvg in terms of SVG compliance, but needs to gain support for more SVG elements to be on par with rsvg. Support for animations, scripting, and SVG fonts is not planned.

SVG compliance test chart

When compared to rsvg, this is what libresvg v0.1 looks like:

  • Libresvg doesn’t yet support filters, clipping paths, masks, markers, and patterns (which rsvg does support to an extent).
  • Libresvg has complete support for gradient fills, while rsvg cannot inherit and validate gradient attributes, nor can it read single-stop gradients (swatches, typical of SVG documents produced with Inkscape).
  • Libresvg has better support for text rendering: librsvg doesn’t read xml:space and text-decoration, it also doesn’t always render multiline text correctly and doesn’t support tspan very well.
  • Libresvg has better, though still incomplete, support for CSS 2.

Evgeniy is currently hesitant to start working on SVG 2 support, as the spec isn’t complete yet, nor has there been a decision on which new features will make it into the W3C recommendation.

Further work

One last important thing: support for sprites is currently planned for v0.2. So if you expected to start using libresvg instead of Inkscape to convert master SVG documents (e.g. all icons in a single SVG file) into multiple PNG files, you’ll have to wait a bit. The developer will first have to implement transferring element IDs from the original document to the simplified DOM.
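To make the sprite use case concrete: a "master" SVG holds many icons, each with its own element ID, and the converter needs those IDs to know what to export where. Here is a minimal Python sketch of the ID-to-output mapping involved, using only the standard library; the document and naming scheme are illustrative, not libresvg code:

```python
import xml.etree.ElementTree as ET

# A toy "master" document with two icons, each identified by an id --
# the kind of file people batch-export with Inkscape today.
master = """<svg xmlns="http://www.w3.org/2000/svg">
  <rect id="icon-save" width="16" height="16"/>
  <circle id="icon-open" r="8"/>
</svg>"""

def sprite_targets(svg_text):
    """Collect ids of top-level elements and derive one PNG name per sprite."""
    root = ET.fromstring(svg_text)
    return {el.get("id"): el.get("id") + ".png"
            for el in root if el.get("id")}

print(sprite_targets(master))
# → {'icon-save': 'icon-save.png', 'icon-open': 'icon-open.png'}
```

Until libresvg preserves these IDs through its simplified DOM, it cannot render an individual element on request, which is exactly what sprite export needs.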

Evgeniy doesn’t yet use his new library in SVG Cleaner, but that’s temporary. He says he might return to this after releasing libresvg v0.3.

The source code of libresvg and the involved toolchain is available on GitHub. At some point in the future, the project will probably be renamed, for fairly obvious reasons. Evgeniy welcomes ideas on that.

How to view reference images in GIMP

Showing reference images for painting is a fairly common feature request from GIMP users. While a specifically designed solution surely wouldn't come amiss, there is a simple way to work around this. Here is how you can do it with pretty much any version of GIMP from at least the past 10+ years.

Viewing reference images

Let's take the default setup of GIMP (2.8 or 2.9 at your preference) with single-window mode enabled. Create an image where you will be painting, and then open an image that will be your reference.

Default GIMP windows layout

Use Windows > Dockable Dialogs > Navigation to open the Navigation window:

Open the Navigation window

By default, it will be added to the sidebar:

Navigation window opened

Now grab its header and drag it outside the sidebar:

Drag navigation dialog out

Once it's no longer docked, it gains a new option in its own menu: Show Image Selection (it's been there since the time when dinosaurs ruled the world). Enable this option by clicking the triangle button (top right corner, below the Auto button) to open the window's menu.

Enable Show Image Selection

Now you have a drop-down list of currently opened images and an Auto button. The button is enabled by default, so that the Navigation window follows the currently active image. Click it to disable autofollowing, then choose your reference image in the drop-down list.

Select image reference

Then you can resize the Navigation window to your liking and start painting. If the Navigation window doesn't stay on top (this depends on your operating system and window manager), one way to fix it is to go to Edit > Preferences > Window Management and choose Keep Above for Window Manager Hints.

Workaround limitations

There are several limitations to this workaround. First of all, it only works when the Navigation window is floating. That means it inevitably overlaps part of your canvas, so it would be desirable to have this image selection drop-down list available when the Navigation window is docked in the sidebar.

Secondly, since the Navigation window wasn't designed for this purpose, you can't zoom and pan your reference image.

And finally, once you use the Navigation window to view your reference image, you lose the ability to use it to pan and zoom on your painting. If this is how you usually pan images, there is a workaround for this.

When the scrollbars are enabled, their intersection in the lower right corner of the canvas has its own hidden navigation widget with the same arrowhead icon. Just click the arrowhead and start panning.

The widget for panning

There are, however, other ways to pan and zoom:

  • press mouse wheel and drag around to pan
  • press Space and move your mouse
  • use Ctrl + mouse wheel up/down (viewport will center around the mouse pointer)
  • use shortcuts to switch to preset zoom levels (View > Zoom will give you an idea)

You can also remap any shortcuts in GIMP and even customize it to zoom in/out with the mouse wheel without pressing Ctrl: go to Edit > Preferences > Input Controllers, then edit the Main Mouse Wheel controller settings.

2018 in perspective

It's arguable, but by now it's pretty safe to say that the proverbial year of Linux on the desktop is never happening. But... do we really need it so much? Especially when there's an impressive lineup of libre software releases set for 2018? Let's see what this year is bringing us.

FreeCAD 0.17

Over the past 1.5 years since the v0.16 release, FreeCAD has gained a huge amount of changes: massive updates to the PartDesign and Path workbenches, composite solids now possible in the Part workbench thanks to an upgrade to a newer OpenCascade kernel, an improved BIM workflow for architects, a new Spreadsheet workbench for importing Excel data, and a new TechDraw workbench for creating technical drawings.

The Arch workbench in particular now features new presets for building precast concrete elements, as well as tools for designing rebars and a plumbing system.

Unfortunately, FreeCAD 0.17 won't be shipped with any Assembly workbench, as available solutions are still experimental, and the focus seems to have shifted from Assembly2 to Assembly3. There are, however, builds of FreeCAD + Assembly3 on GitHub.

Since last year, FreeCAD also has a GitHub repository that unifies the most interesting workbenches/add-ons. It's very much worth checking out.

The FreeCAD team recently announced a feature freeze and is actively encouraging translators to update localization files while developers deal with bug reports.

Release notes are still work in progress, and you can learn a lot more about changes in the Arch workbench from Yorik's blog.

Preliminary builds of FreeCAD 0.17 are available on GitHub as well. If you are interested in providing feedback, this forum thread is for you.

Blender 2.80

Originally proposed to be released "somewhere in 2016", Blender 2.80 now seems complete enough to land somewhere in 2018.

Real-time PBR in the viewport, asset management, Grease Pencil improvements, a complete overhaul of layers and the dependency graph, a UI cleanup... Blender 2.80 has the makings of a huge update that will indeed immensely improve the workflow.

The Blender team does an excellent job promoting new features in the upcoming major update. There's a dedicated page that serves as intermediate release notes for v2.80. Moreover, Ton Roosendaal recently posted a great overview of new stuff expected this year. Do check it out!

Krita 4.0

Two of the major new features in upcoming Krita 4.0, vector graphics and text, were subject of the 2016 campaign on Kickstarter. For vector graphics, here is an overview from Jeremy Bullock, one of the leading OpenToonz contributors:

With text, the idea was to simplify adding speech balloons and suchlike in comics. That's a somewhat specialized use of the text tool, although adding generic captions works just fine. The team also notes that, due to all the troubles with the tax office, they had to limit the feature set for 4.0 (at least for the text tool), aiming to enhance it in further updates (better OpenType support, more control over glyphs, etc.).

As usual, you should expect many improvements in the painting tools: various user experience refinements, the ability to use brushes larger than 1,000px, and better performance thanks to multi-threading. For an overview of painting-related changes, see this video:

The team also published preliminary release notes well worth reading. Currently, a beta of 4.0 is available for download.

GIMP 2.10

After almost 6 years of work, the GIMP team is finalizing the next big update. The plan is to cut a beta of v2.10 once the number of critical bugs falls further: it's currently stuck at 20, as new bugs get promoted to blockers while old blockers get fixed. It's a bit of an uphill battle.

GIMP 2.10

The team initially intended v2.10 to be more or less a GEGL-based upgrade of v2.8 plus high bit depth precision support. Needless to say, the plan hasn't exactly worked: there will be a lot more than that.

In fact, even now, when only critical bugs are supposed to be worked on, the team cannot resist making improvements that aren't blocking the release. Just last night, Ell implemented masks for layer groups and updated the PSD plug-in accordingly.

So v2.10 is arriving later in 2018 with features including, but not limited to:

  • Processing with 16-/32-bit per color channel precision
  • Loading/exporting 16-bit PNG, 16/32-bit TIFF/PSD/EXR, 16/32/64-bit FITS files
  • Color management rewritten as a core feature, with all color widgets now color-managed
  • 10+ new blend modes: Pass-Through, Linear Burn, Vivid Light, Linear Light etc.
  • 80+ GEGL-based filters, with on-canvas preview
  • New and improved transformation, selection, and painting tools
  • Canvas rotation/flipping
  • Initial multi-threaded image processing

So far, the community's response to the finalization of 2.10 seems mixed. A lot of people feel that the release is too long overdue (and developers readily admit that). Hence the decision to relax the release policy and allow new features in stable branches (when possible). This way, contributions will get to end users a lot faster.

RawTherapee 5.4 and darktable 2.6

Quite a few free software users are torn between RawTherapee and darktable. Both are very solid digital photography applications with an overlapping feature set, yet different approach to the processing workflow and UI/UX.

Local contrast tool in RawTherapee 5.4

RawTherapee 5.4 is currently expected later this February. The release brings quite a few much welcome updates, some of which are:

  • New tools such as histogram matching, HDR Tone Mapping, and Local Contrast
  • New RCD demosaicing algorithm to minimize various artifacts
  • Out-of-gamut areas visualization
  • Creating and processing Sony Pixel Shift ARQ files
  • Saving 32-bit floating-point TIFF files, clamped to [0-1].
  • Lensfun-based chromatic aberration correction
  • Cleaner UI

But there is a lot more going on. In a conversation, RT developer Morgan Hardwood told us:

We have been putting off a major refactoring and unification of the four existing pipelines (main image, thumbnail, etc.) into one. That work will begin now and should make a lot of new cool stuff possible, like on-canvas editing.

Naturally, there are no estimations of release dates beyond v5.4 at this point.

For darktable, it's hard to predict what's coming in the next major version. The team traditionally releases a major update around winter holidays time, so we are a mere month into the new development cycle.

There are, however, two new features that might make it to the next big update. The first one is a Filmulate plugin that reuses Filmulator technology to emulate film development.

The other one is a new Retouch tool that performs various operations, such as healing, on wavelet scales. The team wasn't originally fond of adding localized edits beyond spot removal to darktable, but they eventually gave in when Liquify was submitted by a contributor (and it took quite a while to complete the feature). Releasing darktable with even more retouching tools could be... well, fun?

SVG2 to be finalized

In November 2016, we published an interview with Tavmjong Bah, Inkscape's core team developer responsible for introducing several artist-centered features to the upcoming SVG2. During the conversation, he voiced his concerns about the possibility of terminating the working group and moving the specification to W3C's Web Platform Incubator Community Group (WICG), where its future would be rather uncertain.

The charter wasn't renewed in January 2017, but the project wasn't moved to WICG either. A new charter was announced in August with Microsoft's Bogdan Brinza (Principal PM Manager, Microsoft Edge) at the helm.

The WG was rechartered for the sole purpose of getting SVG2 unstuck and making it reach the Proposed Recommendation status which is scheduled for June 2018. Not quite coincidentally, this is when this WG will be disbanded again.

The Charter page is very specific about the focus of this charter period:

As a primary focus [...], the group will concentrate on the stabilisation and interoperability testing of the core SVG2 specification. As part of that testing, features which are in the reference draft of SVG2 and which do not meet the stability and interoperability requirements for a Proposed Recommendation may be moved to separate specification modules, work on which would remain in scope, but at a lower priority.

This is what the working group has been busy with ever since.

FreieFarbe/FreeColour is going DIN

In December 2017, FreieFarbe e.V. announced that their initiative for an "Open Colour Communication" standard was supported by DIN and will become a DIN SPEC (the first step towards a DIN Norm). DIN reportedly intends to turn this into an international standard via ISO later.

The FreieFarbe / FreeColour initiative aims to provide an open alternative to Pantone, HKS, and other proprietary colour systems. They argue that, unlike Pantone and some other proprietary manufacturers like RAL, FreieFarbe has an actual color system.

As part of the proposal to DIN, they submitted a prototype of a CIE LCH based color reference (printed by Proof.de), where colors are sorted by their hue, lightness, and chroma values in steps of 10, 5, and 10 respectively (hue would be in steps of 5 in the final version). Which is, in fact, quite similar (if not identical) to the color system of RAL Design.
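The LCH (or HLC, in the atlas's ordering) model mentioned above is just CIE Lab expressed in polar coordinates: hue is the angle, chroma is the radius, and lightness maps directly to Lab's L. A minimal Python sketch of the conversion (the function name and the sample colour are illustrative, not part of the FreieFarbe spec):

```python
import math

def hlc_to_lab(h_deg, l, c):
    """Convert an HLC (hue, lightness, chroma) colour to CIE Lab.

    Lab's a/b axes are the Cartesian form of the polar pair
    (chroma, hue), while L carries over unchanged.
    """
    h_rad = math.radians(h_deg)
    a = c * math.cos(h_rad)
    b = c * math.sin(h_rad)
    return (l, a, b)

# Hue 40°, lightness 50, chroma 50:
# a = 50*cos(40°) ≈ 38.3, b = 50*sin(40°) ≈ 32.1
print(hlc_to_lab(40, 50, 50))
```

Sorting colours in even hue/lightness/chroma steps, as the atlas does, therefore produces a perceptually regular grid in Lab space.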

The team has just published the HLC Colour Atlas: a printed reference (A4, ring binder), printed documentation in German and English, colour palettes with LAB values in ASE (Adobe), SBZ (SwatchBooker), and other file formats, a PDF master file of the atlas with layers for different output targets, and a CxF3 file where color data is stored as spectral values.

The specification should be done by June 2018. Ink formulas might not make it to the spec, in which case FreieFarbe e.V. promises to publish them freely online.

Ardour 6.0

Although projects like LMMS, MusE, and Rosegarden haven't really gone anywhere and have their followings, it does look like Ardour and Qtractor are the dominant digital audio workstations on Linux these days. Both projects have exemplary maintenance and get regular updates, although Ardour's release pace recently slipped for a good reason.

Ardour 6 alpha

Since mid-2017 or so, Ardour has been undergoing a completely boring procedure called refactoring and internal redesign. Hence Ardour 6, expected later this year, will feature mostly behind-the-scenes changes. Most of the work going into the next version so far is architectural (like proper handling of musical time), with one exception: cue monitoring.

At this point, it's hard to tell whether it's going to stay that way by the time v6.0 is finalized (after all, GIMP 2.10 was going to be mostly v2.8+GEGL, and we do know how this ended). That said, further 6.x releases are likely to gain what lead developer Paul Davis cautiously calls "some features to support a more "groove-centric" workflow".

It's not exactly a huge surprise that Paul has been interested in making Ardour more suitable for live performances for quite a long time. So we probably should be looking forward to something along the lines of advanced looping and sample stretching. Existing support for both Ableton Push 2 and NI Maschine 2 control surfaces would come in handy then.

So far, Ardour 6 looks like a summer-time release, but it's too early to tell.

More synths awesomeness

Last year, VCV Rack stormed into the softsynths scene as a free/libre software implementation of Eurorack/modular synths and became one of the most exciting projects in the Linux audio ecosystem.

VCV Rack is designed as a real modular synth, and there's an increasing amount of all sorts of modules available. And this thing is addictive as hell. We expect VCV Rack to keep rapidly growing this year.

In 2018, we are also likely to see further improvements to Zyn-Fusion, the next generation of ZynAddSubFX. Although Mark McCurry only raised half the money he expected by selling binaries of Zyn-Fusion on Gumroad, he doesn't regret the decision one bit. On the last day of 2017, he released the final bit of source code he wrote for that project, so now anyone can build ZynAddSubFX with the new, improved UI from source.

From now on, the old UI is getting just bugfixes, all new stuff is happening in the new UI. The 3.1.x series is expected to focus on workflow improvements. If you don't have Zyn-Fusion in your Linux repo, you can have a go at build instructions.

After a spectacular launch around 2016, the free/libre Helm soft synth wasn't getting many updates in 2017. It might seem that Matt Tytel lost interest in the project, but he was actually rethinking it:

There were a bunch of things I wanted to change in Helm, but they would require ripping out most features. I'm going to fix more Helm bugs in the future, but I will not add any features. I'm working on a new synth with a new name.

Again, no release dates.

NLEs

Unlike with DAWs, non-linear video editing is an area where it's quite impossible to mention one application without hearing "But you forgot [my libre NLE of choice]!". Indeed, there are just so many of them these days!

Pitivi, Shotcut, Kdenlive, Flowblade, OpenShot... Most of these projects have regular updates. Blender VSE reportedly still doesn't have a maintainer, but is now being improved by Nathan Lovato et al. via his Power Sequencer add-on. And, of course, we still have three flavours of Cinelerra. Even Lumiera still shows signs of life.

So in 2018, you are in for a treat, whichever non-linear video editor you end up using.

Mastering Inkscape in 2018: what’s best among books, courses, and tutorials

As someone who maintains social media accounts for a few free/libre software projects, one of the top questions I keep being asked is how and where to learn this or that application. So this is an attempt at a definitive guide to various learning resources on Inkscape, the free/libre vector graphics editor.

Please note that I compiled this list based on my own criteria of usefulness. This basically means that I watched and read almost everything there is to watch and read, and then made up my mind if I think it's worth recommending. Thus it's inherently subjective. The list also covers only the resources in English.

That said, if you think I missed a useful Inkscape educational resource (the popular expression seems to be 'you forgot'), please do link to it in the comments section!

With that in mind, let's go.

Books

One thing that has to be immediately pointed out is that most books on Inkscape are somewhat outdated and cover v0.47 and v0.48 (all books published by Packtpub, in particular).

It has a lot to do with how book publishing works, and also with the five-year gap between v0.48, released in 2010, and v0.91, released in 2015. Writing a book usually takes at least half a year, and publishers want to be sure that people who buy the book will actually have the software to go along with it (which is why there aren't many new GIMP books, either).

Nevertheless, while there's always a ton of improvements in each Inkscape release, all basics are the same even 10 years later, and many advanced techniques are the same as well.

The online version of Tavmjong Bah's book "Inkscape: Guide to a Vector Drawing Program" is probably the only exception where you get up-to-date material. Even though Tav is very busy working on Inkscape and participating in the SVG W3C working group's activities, he does his best to maintain it.

Book by Tavmjong Bah

The book works best as a reference rather than a user guide. You may not like Tavmjong's dry technical style of writing, but he is extremely thorough, and for years it was pretty much the only comprehensive guide to Inkscape. Which is likely the reason it has been linked to right from the Help menu in Inkscape.

The online version is considered the 5th edition and was last updated in 2017, while the last printed version is the 4th edition from 2011. You should keep that in mind if you go for the hardcopy instead.

Dmitry Kirsanov's "The Book of Inkscape" is another book written by an actual Inkscape developer. Dmitry contributed numerous improvements and several new tools in the early days of Inkscape, under a nickname.

Book by Dmitry

Released in 2009, his book is a similarly thorough user guide that can also be used as a reference, but it includes a number of quick tutorials to help you practice using Inkscape. One particularly great thing about this book is that Dmitry makes a point of using Inkscape efficiently, via keyboard shortcuts, which really does help you master the software.

The author's original intent, as explained in the preface, was to cover all Inkscape features and evangelize vector graphics. After 14 years of using Inkscape, I don't think I need much convincing, so no particular opinion there. But covering all features is what Dmitry did brilliantly. Although the book is 9 years old, it'll get you going just fine.

Video courses

It looks like Udemy is currently #1 resource of structured educational video material on Inkscape. As of February 2018, the platform provides over 20 paid Inkscape-focused or Inkscape-related courses.

I'll be honest with you: I haven't watched all the available courses on Udemy, as that would put a dent in my budget. But these are the two I picked and watched, based on user-submitted reviews.

"Inkscape For Beginners 2016/2017" by Michael DiGregorio will work best as an introduction and reference to Inkscape's toolbox and some basic features. Michael meticulously covers what each tool does. But don't expect to get very creative during the course, and there are no course assignments.

Course by Michael

Also, you won't learn to draw anything fun, unless it's what you do additionally to the course. Another nitpick is that, personally, I found the quizzes not entirely representative of the information I was supposed to learn.

Still, as a video reference to the features of a completely new application, this works quite OK. And since the audio quality is good and Michael is a native English speaker, the course is quite watchable at 1.25x speed (stretching it to 1.5x might or might not work for you).

As of a few weeks ago, the second part of that course is also available. It covers more of Inkscape's features, like live path effects, extensions, and more.

István Szép is currently the most prolific Inkscape instructor on Udemy. He has authored and co-authored 8 courses that involve using Inkscape. "Learn Inkscape now" is his top-selling Inkscape course and a rather good introduction to the application.

Courses by István

The benefit of István's intro course over Michael's course is that you get to make actual simple drawings in almost every video. It comes at the expense of not learning every single option of every tool though. So that's a choice you would have to make.

István is a native of Hungary, and as a non-native English speaker myself, I was quite able to follow his instructions. Judging by reviews, though, several students had problems with his pronunciation, so there's that, too.

Text/image tutorials

Two out of three resources that I can wholeheartedly recommend are devoted to using Inkscape for game design. It sounds like a niche thing, but the fact is that both authors can teach you how to use Inkscape to draw something that looks nice and can be done in maybe a dozen steps. Isn't that what every beginner needs?

Chris Hildenbrand is one of the few people who made the Inkscape scene explode in the early 2010s by consistently releasing high-quality tutorials for beginner game designers, all requiring some basic skills and knowing your way around the application.

Chris Hildenbrand tutorial

A nicer index of his older tutorials is available on Zeef.com. Besides tutorials, Chris also provides great feedback and advice to his readers.

Olga Bikmullina is another author of popular Inkscape tutorials. She's also a regular participant at the annual CG Event conference in Moscow where she talks about using Inkscape for game design (both the good bits and the bad bits).

While she hasn't written much for the past few years, her website has amassed enough easy-to-follow tutorials to keep you busy for quite a while.

Scroll made with Inkscape

Over the years, Envato published a bunch of tutorials on Tutsplus.com, mostly by Chris Hildenbrand and Aaron Nieze. They vary from tips-and-tricks advice to step-by-step tutorials to draw something like this adorable hedgehog:

Adorable hedgehog made with Inkscape, by Aaron Nieze

Video tutorials

Currently, Nick Saporito is probably the most popular author of Inkscape video tutorials, with over 150 Inkscape clips on YouTube. His videos are usually very easy to follow and focus on typical use cases: lettering, designing logos, making posters, drawing elements of infographics, and suchlike.

He also makes similarly good GIMP tutorials. All videos are neatly grouped into a variety of playlists. Check them out here.

The Grafikwork channel on YouTube can be a great source of learning new skills for illustrators. There is no narration, and all videos are timelapses. So it will be most useful for Inkscape users who have a good grasp of how various features work, but can't easily go from a bunch of basic shapes to a complete illustration.

The author is from Ukraine, and until fairly recently videos in the channel showed Inkscape in Russian. Those are still more or less easy to follow, since the author mostly uses basic drawing tools.

Grafikwork has over 80 tutorials currently and is usually updated once or twice a week.

Siddhesh's "Sids Art - Inkscape And Drawings" channel has over 70 timelapse tutorials on creating illustrations with Inkscape. A lot of them focus on flat design.

Graphic Design Studio has over 200 silent real-time tutorials on using Inkscape for simple design: logos, gift boxes, shopping bags, infographics, game assets etc. Pretty much the bread and butter of graphic design.

Swapnil Rane's MadFireOn channel mostly features tutorials on designing flat art backgrounds, but also some logo and icon design tuts. Commentary is more or less OK, but I really wish he'd get a better mic.

Ardent Designs channel has over 130 voiced video tutorials for both beginners and experienced Inkscape users. Usual topics are icons and logos. Visually, they are a bit hit-and-miss, but they are generally easy to follow, and narration is quite OK.

Apart from making structured video courses, István Szép has put a dozen timelapses on YouTube. They are quite fun to watch and help you get into the head of a designer and illustrator.

Around 2015, Chris Hildenbrand decided to up his game and make video instructions on using Inkscape for game design. He ended up creating a little over a dozen videos that are very useful, but a little painful to watch, since the text wasn't scripted.

Butterscotch Shenanigans channel isn't focused on Inkscape, it's mostly about games they make. But Inkscape was one of their major tools during the production of Crashlands. So there are currently 9 illustration tutorials on just that: various illustrations for the game.

If you think you can learn from other artists' workflows (and funny commentary), do check them out. Please note that unlike most channels in the review, they haven't released any new Inkscape tuts since 2014.

What's next

One thing I need to stress is that sticking to educational material on just Inkscape is probably the worst mistake you can make early in your career.

People who use other software make great art, and there are workflow insights that can easily be transferred from app to app. Trying to replicate such tutorials with Inkscape will actually help you learn Inkscape better.

Krita 4.0 released, conversation with the team


After 1.5 years in the making, Krita 4.0 is finally released with a ton of improvements mostly for digital painting, but also with a number of features useful for general image editing.

Here are some of the release highlights:

  • Vector graphics tools rebuilt with SVG and improved
  • Much simplified creation of word balloons for comic artists
  • New, easier to use text tool
  • Python scripting, a set of rather useful plugins, and a manager for them
  • Colorize mask tool for quick colorization of sketches
  • Background saving of project files

You’ll find detailed information on these and other new features in the release notes.

Now that Krita v4.0 is finally out, we had a little conversation with Boudewijn Rempt, Wolthera van Hövell tot Westerflier, and Scott Petrovic.

First of all, congratulations to getting v4.0 out of the door. This release is quite an achievement! What are you most proud of, in terms of team work or maybe your own personal involvement?

Boudewijn: I'm proud of all of it. We started coding on this release in 2016, with the export warnings feature, and maintained coding velocity all through 2017, doing 3.x releases along the way. There's just so much in this release... And then today, we released Krita with scripting, and almost immediately someone published a script to post images on Mastodon from Krita. That's amazing, isn't it?

You had to rewrite both internals and parts of UI for this release. What were the challenges UI-wise during this release cycle? Or maybe internal changes were more prominent?

Boudewijn: There are some parts of Krita where the big challenge was to let them be, for now. We know that resource management is a tough problem, and our current code is... Well, it was designed for the simple nineties, and extended and expanded beyond what it can bear. But redoing that for this release just wasn't feasible. Leaving alone something that we know is so problematic, well, that's hard! Same with the text tool in some ways. I know what I want to do there, but I couldn't, being tied up with non-coding chores.

Scott: In terms of the UI, it was surprisingly easy for the most part. I think our team is getting more in sync with our design and development process with what needs to be done.
The vector tools and the text tool UI designs mostly had positive feedback and only minor tweaks. I spent a long time studying workflows and UI patterns across a variety of free and proprietary programs before presenting the current solution.

The most difficult part of the text and vector project was to establish what the functional requirements were and scoping how much could get done. With all UI design projects I try to plan for a long term vision that people get excited about. I then modify the UI depending on how the developers are doing with time. We prioritize the features that will make the biggest impact when we get in a time crunch.

The most challenging part of the UI for Krita 4.0 was all the updates to the brush editor. None of the work was planned, but I felt it was the UI area in Krita that needed the most attention.

The brush editor UI has been a certain way for a number of years. You could tell there were features slowly added over the years without the overall design being thought about. It was difficult for people to see the brush settings differently and break the existing mold.

There were a number of existing problems with the 3.x brush editor in our bug tracker. I tried to anchor the design discussions on solving the current problems instead of being scared when a disagreement arose. Logic behind UI decisions has been the driving force for solving disputes and breaking free from old patterns.

Wolthera: Or in my case, not interfering with the settings discussion at all. For a long-time user, discussions like these always tend toward 'Oh, but is changing all that really necessary? Aren't there more problematic areas to tend to?' I recognised that in myself, so I just left it alone, having full faith that Scott wouldn't come up with something unusable.

After Scott was done, all that I did to the editor was a little bit of polish and we had to work something out to get a 'create preset from scratch' function. It isn't used super often, as artists tend to modify existing presets. But when we create a new brush engine or the user removes all presets but one, the from scratch function is kind of necessary to even make presets.

What are the major changes in vector features you want going after next?

Boudewijn: SVG filters, patterns, fixing UI stuff, fixing more UI stuff, fixing bugs. So please do report bugs!

Actually, we already have an implementation for SVG filters, but it has been disabled for ages. You can enable it by adding a line like

ToolsBlacklist=CreatePathTool,KoPencilTool,ConnectionTool,KarbonFilterEffectsTool,KritaShape/KisToolText,ArtisticTextTool,TextTool

to your local ~/.config/kritarc and removing the KarbonFilterEffectsTool entry from that list.

Not sure whether the filters save and load correctly, though.

Scott: I would like to see the vector tools and the text tools workflow merge a bit more than they currently do. Because of time constraints, the user experience for the text tools had to diverge a bit from what we were originally planning. Right now some of the properties for text, like border and fill, are in the vector area, while the text editing properties are in the text tool. I am not sure what will get done though.

Do you follow development of SVG 2.0?

Boudewijn: We follow Tav :-) [Tavmjong Bah, developer of Inkscape and SVG working group member—LGW] I subscribe to his Patreon, and I follow his commits to Inkscape closely.

Gradient editor is currently in a docker. Is this temporary, with on-canvas editing planned for the future? Or will you keep it like this?

Wolthera: We're keeping it like this for now.

Scott: Right now all of our tools have tool options that can be used to customize how a tool works. Filling a vector object can have multiple ways to fill it: none, solid fill, gradient, etc. For each of these fill choices, there are additional options that can be specified. If the fill type is set to gradient, you can change the start and stop position on the canvas. The other properties must be set in the tool options. Keeping these options organized is very important in terms of usability, so I don’t see us specifically pulling those options out while keeping other options in a docker.

Why did you pick Python for scripting and not Lua (which is thread-safe)?

Boudewijn: Because pretty much the entire VFX world uses Python, and because I wrote a book about Python in 2000 :-). And... Well... There is no safety in threads.

How much of internals are covered by scripting for 4.0?

Wolthera: A very small part. We wrote a separate wrapper library to handle the more difficult-to-understand parts, and then turned those into Python bindings with SIP. Thing is, with each function we expose, we need to ensure that it doesn't crash somehow, or cause memory leaks, and just has good documentation. People shouldn't have to worry about those kinds of things, as long as they carefully read the API docs.

Boudewijn: Let's first see what people do with what we have now, and where the problems are, and which bits of the API need changes.
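To give an idea of what scripts built on the new API look like — here is a minimal sketch of a Krita 4 Python script that exports the active document to PNG, in the spirit of the Mastodon-posting script mentioned above. It assumes Krita 4's `krita` module (`Krita.instance()`, `activeDocument()`, `exportImage()`), which exists only inside a running Krita; the `export_path()` helper is a hypothetical name used here purely for illustration.

```python
import os

def export_path(doc_name, folder="/tmp"):
    """Build a filesystem-safe PNG path from a document name (pure helper,
    illustrative only -- not part of Krita's API)."""
    safe = "".join(c if c.isalnum() or c in "-_" else "_" for c in doc_name)
    return os.path.join(folder, safe + ".png")

try:
    # The krita module is only importable from inside Krita itself.
    from krita import Krita, InfoObject
except ImportError:
    Krita = None  # running outside Krita; only the pure helper is usable

def export_active_document():
    """Export the current document to PNG; returns the path, or None."""
    if Krita is None:
        return None
    app = Krita.instance()
    doc = app.activeDocument()
    if doc is None:
        return None
    path = export_path(doc.name())
    doc.exportImage(path, InfoObject())  # export settings left at defaults
    return path
```

Run from Krita's Scripter plugin (or a plugin registered via the new plugin manager), `export_active_document()` would save the current image and return its path; outside Krita it simply returns None.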

Given that you had to cut down on text tool work because of all the troubles with taxes, I'm guessing there will be more text tool features after the v4.0 release?

Boudewijn: Sure. We'll probably even replace the layout engine completely at one point. That's why we made the editor communicate with the text object using SVG. But we had to have something after having had me waste most of 2017.

What's the current layout engine?

Wolthera: QTextLayout.

Doesn't Qt use the Harfbuzz OpenType layout engine anyway, deep down?

Wolthera: It does.

Boudewijn: Qt still has two versions of Harfbuzz embedded. I've tried working on an existing patch to enable vertical text, but that broke all menus, and it's a big and complicated thing.

Wait, two versions of Harfbuzz? That's crazy! Why?

Boudewijn: Because it's a complicated thing to completely port away from Harfbuzz v1. That version has some functions to work with glyph tables that v2 doesn't have, I think.

So I first started working on removing the copy of Harfbuzz 1 from Qt, but there's no migration guide, and both versions of Harfbuzz have very little real documentation...

Do you have variable fonts on the radar?

Wolthera: Our goals are, font support wise:

  1. Get text flow in all important directions working.
  2. Get word wrap and text on path working.
  3. Get OpenType features like ligatures and suchlike working.

And maybe FreeType 2.8 will be widespread enough by that time, so we can add variable fonts later.

Just to make sure: "text flow in all important directions" as in top-to-bottom block progression, basically, vertical writing systems?

Wolthera: Yes.

Earlier you mentioned focusing even more on painting and dropping photography-related features such as the RAW loader. Is that still the plan?

Boudewijn: I still want to. I don't think anyone uses it at the moment.

After all the work you've done on v4.0 and all the last year troubles, do you feel like you should take a vacation, go south, and just sit there watching sunrises on a sea shore and do nothing for a week or two? :)

Boudewijn: I'll be going to sunny Sevilla! Well, for LGM, but it will almost be a vacation! And tomorrow I'm going to have a haircut and spend some time drawing. Or bugfixing. Not sure yet.

Having read about your problems with the tax service, your community pretty much embraced you in a warm fuzzy cloud of love and support :) Where are you financially today, and how many people work on Krita full-time or part-time?

Boudewijn: We got our financial buffer back, which is good. We also have done one project with Intel, for the multi-core optimizations and are doing another one; that is funding Jouni's part-time work. Dmitry is funded by donations, and I am funded by the Windows Store sales. I burned out having a full-time job in a volatile company in 2016, then had the tax trouble, so it's a good thing I don't have to hold down a day job in addition to the Krita work right now. We'll be discussing another fundraiser during the sprint in May.

What’s the best way to support you while not being a programmer?

Boudewijn: Triage bugs! We're getting so many bug reports that are just thinly veiled support requests, and those need to be weeded out before I see them. We get more than a thousand bug reports a year, and they all need a good and careful answer, but only about a third need coding.

Wolthera: Beyond that, helping other Krita users and making write-ups of your workflow/making tutorials is really useful too. Being a free graphics program, we attract a lot of people who don't understand computers but want to draw, and Krita is a program that lets them draw. This leads to a lot of basic problems, like people not understanding that some file formats store layers and others don't, or why one brush is slower than another, or just not having any idea of a painting workflow, so people answering those questions are very, very welcome.
