Metrics of Haters

When I posted my Closing a Door post, I mentioned that a team of moderators would be filtering comments for me. Comments that did not meet my comment policy would not be approved. Moderators also found that some comments simply did not further the conversation, were unclear and confusing due to translation issues, or were just contentless spews of hatred.

The comments on that post are now closed. The moderators approved a total of 254 comments, with 213 comments on my “Closing a Door” post, and 39 comments on my follow-up post “What Makes A Good Community?” The moderators also filtered out 186 comments total on those two posts. Now that the internet shit storm is over, I thought it would be interesting to take a peek into the acid-filled well in order to pull out some metrics.

Of course, I didn’t want to actually read the comments. That would be silly! It would completely defeat the purpose of having comment moderators and let the trolls win. So, instead I used the power of open source to generate the metrics. I used the WordPress Exporter plugin to export all the comments on the two posts in XML. Then I used the python wpparser library to parse the XML into something sensible. From there, the program wrote the commenters’ names, email addresses, and IP addresses [1] into a CSV. I did some manual categorization of that information in Google Docs.
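For the curious, the extraction step looks roughly like this. This is a minimal stdlib-only sketch rather than the wpparser-based program I actually ran; the wp: element names come from the WordPress WXR export format, and the comments_to_csv helper is mine.

```python
import csv
import xml.etree.ElementTree as ET

# Namespace used by the WordPress WXR export format; the version
# component (1.2 here) depends on the WordPress release that exported it.
WP_NS = "{http://wordpress.org/export/1.2/}"

def comments_to_csv(export_xml, csv_path):
    """Pull each commenter's name, email, and IP out of a WXR export string."""
    root = ET.fromstring(export_xml)
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "email", "ip"])
        for comment in root.iter(WP_NS + "comment"):
            writer.writerow([
                comment.findtext(WP_NS + "comment_author", ""),
                comment.findtext(WP_NS + "comment_author_email", ""),
                comment.findtext(WP_NS + "comment_author_IP", ""),
            ])
```

From there the CSV drops straight into a spreadsheet for the manual categorization.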

Repeat Offenders or Drive-by Haters?

70% of the 186 filtered comments were from unique IP addresses. The remaining 30% of comments were generated by 19 different people, who left an average of three comments each. The most persistent troll commented 10 times.
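The drive-by vs. repeat-commenter split falls out of a simple frequency count over the IP column of that CSV. A sketch (the helper name is mine, and the sample addresses in the test are hypothetical):

```python
from collections import Counter

def repeat_stats(ips):
    """Given one IP address per filtered comment, split the comments
    into drive-by commenters (IP seen once) and repeat commenters."""
    counts = Counter(ips)
    drive_by = sum(1 for n in counts.values() if n == 1)
    repeats = {ip: n for ip, n in counts.items() if n > 1}
    return {
        "drive_by_pct": round(100 * drive_by / len(ips)),
        "repeat_people": len(repeats),
        "most_persistent": max(counts.values()),
    }
```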

Anonymous Cowards or Brave Truth Tellers?

72% of the 186 comments did not include a full name. Of the commenters that did not include a full name:

  • 39 people used just a first name, making up 24% of the comments.
  • 25 people used what looks like internet nicks, accounting for 16% of the comments.
  • 17 people used various forms of the word “anonymous” in the name field, making up 9% of the comments.
  • 12 people used an English word instead of a name, accounting for 8% of the comments.
  • 4 people used obviously fake names, accounting for 7% of the comments.
  • 8 people used their initials or one letter, accounting for 5% of the comments.
  • 5 people used a slur in their name, accounting for 3% of the comments.
  • 2 people used a threat in their name, accounting for 1% of the comments. [Edit: make that 3, or 2%]

Community Members or Internet Trolls?

38 people used a full name, accounting for 28% of the comments. That means approximately 1/3 were brave enough to put their real name behind their comments. (Or a full fake name.) The question becomes, are these people actually a part of the open source community? Are they people who have actually interacted on an open source mailing list before? To answer these questions, I chose to search for each author name in the Mailing List Archives (MARC), where a variety of open source mailing lists are archived, including the Linux kernel subsystem mailing lists, BSD, database lists, etc.

Of the 38 people who used their real name, 14 people had interacted on an open source mailing list archived by MARC. They made up 8% of the filtered comments. Ten of those people had more than 10 mails to the lists.

[Edit] Of the 25 people that used what looked like internet nicks, 11 of them may be open source users (see analysis below in the comments). That accounted for 8% of the filtered comments.

The key takeaway here is that only 16% of the filtered comments were made by open source users and developers. This is an important finding, since the article itself was about open source community dynamics.

[1] Before you scream about privacy, note that my comment policy allows me to collect and potentially publish this information.

Building a custom Intel graphics stack

When I worked as a Linux kernel developer, I often ran across people who were very concerned about compiling and installing a custom kernel. They really didn’t like running the bleeding edge kernel in order to check that a specific bug still existed. Who can blame them? A rogue kernel can corrupt your file system and you could lose data.

Fortunately, there is a safer way to build the latest version of drm and Mesa in order to check whether an Intel bug still exists in the master branch. Since Mesa is just a userspace project, it is possible to install it into a custom directory, set the right environment variables, and have your programs dynamically link against those custom drm and Mesa binaries. Your desktop will continue to run on your distro’s system-installed mesa version, but you can run other programs linked against your custom mesa.

Unfortunately, mesa has some dependencies, and the instructions for building into a custom directory are scattered across the mesa homepage, the DRI wiki, the Xserver wiki, GitHub, and mailing list posts. I’m going to attempt to condense these instructions into one single place, and then clean up those pages to be consistent later.

Debug build or better performance?

In this tutorial, I’ll assume that you want to build a version of drm and mesa with debugging enabled. This *will* slow down performance, but it will enable you to get backtraces, run gdb, and gather more debugging information than you normally would. If you don’t want a debug build, remove the parts of the commands that add the “debug” flag to the USE environment variable or files.

The point of this tutorial is to be able to install drm and mesa in a directory, so that you don’t have to install them over your distro’s binaries. This means you’ll be able to run the specific test you need, without running into other bugs caused by running your full desktop environment on the bleeding edge graphics stack. In this tutorial, I will assume you want to put your graphics installation in $HOME/graphics-install. Change that to whatever your heart desires.

If you are working behind a proxy, you’ll need to have a .gitconfig file in your homedir that tells git how to clone through the proxy.
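For example, a minimal ~/.gitconfig along these lines routes clones through a proxy. The proxy host, port, and the connect helper here are placeholders for your own environment; note that git:// clones in particular need core.gitproxy rather than http.proxy:

```ini
# http:// and https:// clones go through the HTTP proxy
[http]
	proxy = http://proxy.example.com:8080

# git:// clones instead invoke this command as "<command> <host> <port>";
# the "connect" helper (from the connect-proxy package) does the tunneling
[core]
	gitproxy = "connect -H proxy.example.com:8080"
```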

I also assume you’re running a debian-based system, specifically Ubuntu 14.04 in my case. If you’re on an RPM-based distro, change the package install commands accordingly.

Get mesa dependencies

sudo apt-get build-dep libdrm mesa mesa-utils
sudo apt-get install linux-headers-`uname -r` \
    libxi-dev libxmu-dev x11proto-xf86vidmode-dev \
    xutils-dev mesa-utils llvm git autoconf automake \
    libtool ninja-build libgbm-dev

Clone the repositories

mkdir git; cd git
git clone git://
git clone git://
git clone git://
git clone git://
git clone git://
git clone git://

Set up Chad’s development tools

Chad Versace has been working on a set of scripts that will set up all the right environment variables to run programs that will use custom-installed mesa and drm binaries. Let’s get those configured properly.

Edit your .bashrc to include the following lines:

export GOPATH=$HOME/go
export PATH=/usr/lib/ccache:$HOME/bin:$PATH:$GOPATH/bin
export PYTHONPATH=~/bin/mesa-dev-tools/bin/:$PYTHONPATH

Now it’s time to set up Chad’s tools.

cd dev-tools/

We’ll be installing everything in ~/graphics-install, so we need to create a file with the following contents:

prefix := $(HOME)/graphics-install
USE := "debug"

This will add the debug flag to all builds, which includes debugging symbols so you can use gdb (as well as some additional code that could impact performance, so leave the flag out if you’re doing performance testing!).

Build and install the development scripts:

make && make install

Exit your current shell, and start a new shell, so that the changes to the .bashrc and the installation of Chad’s scripts take effect.

Next, we need to get all the paths set properly to use Chad’s scripts to build mesa and libdrm into ~/graphics-install. We invoke the prefix-env script, and tell it to exec the command to start a new shell:

cd git/dev-tools
PREFIX="$HOME/graphics-install" USE="debug" \
    bin/prefix-env exec --prefix=$HOME/graphics-install bash

Double check that worked by seeing whether we have the right mesa-configure script on our path:

sarah@dingo:~/git/dev-tools$ which mesa-configure

Check which Mesa version you’re running. Later, after installing a custom Mesa, we’ll verify the installation by confirming that the active Mesa version has changed.

sudo glxinfo > /tmp/glxinfo-old.txt

Note that glxinfo calls through the Xserver to find out which Mesa is in use. If your system’s xorg installation is too old, the Xserver won’t be able to find an API-compatible version of Mesa, and you’ll see errors like:

Error: "couldn't find RGB GLX visual or fbconfig"

Fortunately, we can run many Mesa programs without involving the Xserver. Another way to find out which version of mesa you’re running without going through the Xserver is the wflinfo command:

sudo wflinfo --platform gbm --api gl > /tmp/wflinfo-old.txt

We can see which version of mesa (11.0.2) is installed by default on Ubuntu 14.04:

sarah@dingo:~/git/dev-tools$ grep Mesa /tmp/*info-old.txt
client glx vendor string: Mesa Project and SGI
OpenGL core profile version string: 3.3 (Core Profile) Mesa 11.0.2
OpenGL version string: 3.0 Mesa 11.0.2
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 11.0.2

Now that we have Chad’s tools and the environment variables set up, it’s important to build everything from the shell where you ran the prefix-env command. If you need to open additional virtual consoles, make sure to change into ~/git/dev-tools/ and re-run the prefix-env command there.

Building libdrm

The direct rendering manager library, libdrm, is a prerequisite for mesa. The two projects are pretty intertwined, so you need to have updated installations of both.

Change directories into your libdrm repository, and configure libdrm with the libdrm-configure script (note that PREFIX was already set when we exec’ed with the prefix-env script):

USE="debug" libdrm-configure
make && make install

Building Mesa

Change directories into your mesa repository, and configure mesa with the mesa-configure script:

cd ../mesa
USE="debug" mesa-configure
make && make install

Building Waffle

Waffle is a library for selecting an OpenGL API and window system at runtime.

cd ../waffle
USE=debug waffle-configure
ninja && ninja install

For some reason, waffle differs from all the other projects here, and likes to install its libraries into $PREFIX/lib/x86_64-linux-gnu/. If you’re on a Debian-based system, you may have to change the configuration files, or simply move the libraries down one directory.

Building glxinfo

Confusingly, a useful debugging tool like glxinfo is found in a mesa repository named demos. Change into that directory:

cd ../demos

Since Chad’s tools don’t cover installation of the demo tools, we’ll have to configure them by hand:

autoreconf --verbose --install -s
./configure --prefix="$HOME/graphics-install"
make -j8 && make install

Confirm installation

Confirm that the environment’s Mesa version matches the version you installed. It should differ from the Mesa version we checked earlier.

sudo glxinfo > /tmp/glxinfo-new.txt

Or run wflinfo instead:

sudo wflinfo --platform gbm --api gl > /tmp/wflinfo-new.txt
grep Mesa /tmp/*info-new.txt

You should see something about a development version of mesa in the output.

Building Piglit

Piglit is the test infrastructure and tests for libdrm and mesa.  Let’s build it:

cd ../piglit
USE=debug piglit-configure

Piglit has a slightly different build system than drm, mesa, and waffle. Because of how the cmake-generated makefiles handle the dependencies between piglit tests, recompiling after a small change takes a very long time. Instead, it’s recommended to use the ninja build system with piglit.

Make piglit:

ninja
Install piglit:

ninja install

Run your tests

Anytime we want to use the newly installed mesa and drm, we need to rerun the prefix-env script to set up all the graphics environment variables to point to those binaries:

PREFIX="$HOME/graphics-install" USE="debug" \
    bin/prefix-env exec --prefix=$HOME/graphics-install bash

Since we haven’t compiled the full Xserver stack, we have to run piglit with a platform other than X11. If you run `piglit run --help`, you’ll see that the platform can be x11_egl, glx (which actually calls the GLX API through the Xserver), mixed_glx_egl (which also implies going through the Xserver), wayland, or gbm. The simplest platform is gbm.

Here’s how you run your very first sanity test with gbm:

PIGLIT_PLATFORM=gbm ./piglit run \
    tests/sanity.tests results/sanity.results

If the output says you passed, give yourself a pat on the back! If not, something probably isn’t installed correctly. You may want to exit all shells, run `git clean -dfx` (to remove all untracked and ignored files) in all the repos, and try again.

To get a more detailed test report, you can run the `piglit summary` command with either console (for text output, good if you don’t have X running), or with html to generate pretty webpages for you to look at on another machine. Piglit will also output test results in a form Jenkins can use.

./piglit summary console results/sanity.results
./piglit summary html --overwrite summary/sanity results/sanity.results

You’ll need the overwrite or append flag if you’re writing results to the same directory.

There’s even more explanations of what you can do with piglit on these two blog posts.

Running games or benchmarks

Once you’ve run the prefix-env script, you should be able to launch benchmarks or other tests. Running games or steam with a custom mesa installation is harder. Since most games are going to use the Xserver platform to call into Mesa’s GL or EGL API, you may need to compile a new Xserver as well.

Optional kernel installation

Sometimes you may want to run the bleeding-edge Intel graphics kernel. Confusingly, the kernel isn’t hosted on! Use the drm-intel-nightly branch from the drm-intel repo on

git clone git://

Instructions on compiling a custom kernel can be found here:

Additionally, you may need to set the i915 kernel module parameter to enable new hardware support. You can do this by changing the kernel command line defaults in /etc/default/grub to this:


And then you’ll need to update your grub configuration files in /boot by running:

sudo update-grub

Graphics linkspam: Bugs, bugs, I’m covered in bugs!

Reporting bugs to Intel graphics developers (or any open source project) can be intimidating. You want the right developers to pay attention to your bug, so you need to provide enough information to help them classify the bug. Ian Romanick describes what makes a good Mesa bug report.

One of the things Ian talks about is tagging your bug report with the right Intel graphics code name, and providing PCI ID information for the graphics hardware. Chad Versace provides a tool to find out which Intel graphics you have on your system. That tool is also useful for translating the marketing names to code names and hardware details (like whether your system is a GT2 or GT3).

In the “omg, that’s epic” category, Adrian analyzes the graphics techniques used in Grand Theft Auto V on PS3. It’s a great post with a lot of visuals. I love the discussion of deleting every other pixel to improve performance in one graphics stage, and then extrapolating them back later. It’s an example of something that’s probably really hardware specific, since Kristian Høgsberg mentioned he doesn’t think it will be much help on Intel graphics hardware. When game designers know they’re only selling into one platform, they can use hardware-specific techniques to improve graphics performance. However, it will bite them later if they try to port their game to other platforms.

How to approach a new system: Linux graphics and Mesa

By now, most of the Linux world knows I’ve stopped working on the Linux kernel and I’m starting work on the Linux graphics stack, specifically on the userspace graphics project called Mesa.

What is Mesa?

Like the Linux kernel, Mesa falls into the “operating system plumbing” category. You don’t notice it until it breaks. Or clogs, slowing down the rest of the system. Or you need plumbing for your shiny new hot tub, only you’re missing some new essential feature, like hot water or tessellation shaders.

The most challenging and exciting part of working on systems “plumbing” projects is optimization, which requires a deep understanding of both the hardware limitations below you in the stack, and the needs of the software layers above you. So where does Mesa fit into the Linux graphics stack?

Mesa is the most basic part of the userspace graphics stack, sitting above the Linux kernel graphics driver that handles command submission to the hardware, and below toolkits like Qt and KDE or game engines like Unity and Unreal. A game engine creates a specialized 3D program, called a shader, that conforms to a particular rev of a graphics spec (such as OpenGL 3.0 or OpenGL ES 3.1). Mesa takes that shader program and compiles it into graphics hardware commands specific to the system Mesa is running on.

What’s cool about Mesa?

The exciting thing for me is the potential for optimizing graphics around system hardware limitations. For example, you can optimize the compiler to generate fewer graphics commands. In theory, fewer commands means less for the hardware to execute, and thus better graphics performance. However, if each of those commands is really expensive to run on your graphics hardware, that optimization can actually end up hurting your performance.

This kind of work is fun for me, because I get to touch and learn about hardware, without actually writing Verilog or VHDL. I get to make hardware do interesting things, like add a new feature that makes the latest Steam game actually render, or add an optimization that improves the performance of a new open source indie game.

Understand how much you don’t know

Without knowledge of both the hardware below and the software layers above, it’s impossible to evaluate which optimizations are useful to pursue. For example, the first time I heard that modern Intel GPUs have a completely separate cache from the CPU, I asked two questions: “What uses that cache?” and “What happens when the cache is full?” The hardware Execution Units (EUs) that execute graphics commands use the cache to store metadata structures called URBs. On discovering that URB size could vary, and that newer hardware had an ever-increasing maximum URB size, my next question was, “How does the graphics stack pick which URB size to use?” Obviously, picking a smaller URB size means that more URBs can fit in the cache, which makes it less likely that an EU will have to stall until there’s room in the cache for the URB it needs for the set of graphics commands it’s executing. However, picking a URB size that is too small would mean programs that use a lot of metadata wouldn’t be able to run.
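The tension is easy to see with toy numbers (the cache and URB sizes below are made up for illustration; they are not real Intel hardware figures):

```python
def urbs_that_fit(cache_kib, urb_kib):
    """How many URBs fit in the GPU cache at a given maximum URB size?"""
    return cache_kib // urb_kib

# With a hypothetical 512 KiB cache: 8 KiB URBs leave room for 64 of them,
# while 64 KiB URBs leave room for only 8, so EUs stall sooner -- but a
# program whose metadata needs more than 8 KiB per URB simply can't run
# with the smaller setting.
```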

Mesa is used by a lot of different programs with different needs, and each may require different URB sizes. The hardware has to be programmed with the maximum URB size, and changing that requires stopping any executing commands in the graphics pipeline. So you can’t go changing the URB size on the fly every time a new 3D graphics program starts running, or you would risk stalling the whole graphics stack. Imagine your entire desktop system freezing every time you launched a new Steam game.

Asking questions helps you ramp faster

It’s my experience with other operating system components that led me to ask questions of my more experienced graphics developer co-workers. I really appreciate working in a community where I know I can ask these kinds of basic questions without fear of backlash at a newcomer. I love working on a team that chats about tech (and non tech!) over lunch. I love the easy access of the Intel graphics and DRI irc channels, where people are willing to answer simple questions like “How do I build the documentation?” (Which turns out to be complex when you run into a bug in doxygen that causes an infinite loop and increasing memory consumption until a less-than-capable build box starts to freeze under memory pressure. Imagine what would have happened if the experienced devs had assumed I was a n00b and ignored me!)

My experience with other complex systems makes me understand that the deep, interesting problems can’t be approached without a long ramp up period and lots of smaller patches to gain understanding of the system. I do chafe a bit as I write those patches, knowing the interesting problems are out there, but I know I have to walk before I start running. In the meantime, I find joy in making blog posts about what I’m learning about the graphics pipeline, and I hope we can walk together.

What makes a good community?

*Pokes head in, sees comments are generally positive*

There’s been a lot of discussion in my comment sections (and on LWN) about what makes a good community, along with suggestions of welcoming open source communities to check out. Your hearts are in the right place, but I’ve never found an open source community that doesn’t need improvement. I’m quite happy to give the Xorg community a chance, mostly because I believe they’re starting from the right place for cultural change.

The thing is, reaching the goal of a diverse community is a step-by-step process. There are no shortcuts. Each step has to be complete before the next level of cultural change is effective. It’s also worth noting that each step along the way benefits all community members, not just diverse contributors.

Level 0: basic human decency

In order to attract diverse candidates, you need to be known as a welcoming community, with a clear set of agreed-upon social norms. It’s not good enough to have a code of conduct. Your leaders need to be actively behind it, and it needs to be enforced.

A level 0 welcoming community exhibits the following characteristics:

Level 1: on-boarding

The next phase in improving diversity is figuring out how to on-board newcomers. If diverse candidates are only 1-10% of newcomers, but you have a 90% fail rate for people who try to make their first contribution, well, you can’t expect many diverse newcomers to stick around, can you? It’s also essential to explain your unwritten tribal knowledge, so that diverse candidates (who are more likely to be afraid of upsetting the status quo) know what they’re getting into.

Signs of a level 1 welcoming community:

  • Documentation on where to interact with the community (irc, mailing list, bug tracker, etc)
  • In-person conferences to encourage networking with new members
  • Video or in-person chats to put a face to a name and encourage empathy and camaraderie
  • Documented first steps for compiling, running, testing, and polishing contributions
  • Easy, no-setup web harness for testing new contributions
  • Step-by-step tutorials, which are kept up-to-date
  • Coding style (what’s required and what’s optional, and who to listen to when developers disagree)
  • Release schedule and feature cut-off dates
  • How to give back non-code contributions (bug reports, docs, tutorials, testing, event planning, graphical design)

Level 2: meaningful contributions

The next step is figuring out what to do with these eager new diverse candidates. If they’ve made it this far through the gauntlet of toxic tech culture, they’re likely to be persistent, smart, and seeking a challenge. If you don’t have meaningful bigger projects for them to contribute to, they’ll move on to the next shiny thing.

Signs of a level 2 welcoming community:

  • Newbie todo lists
  • Larger, self-contained projects
  • Welcoming, available mentors
  • Programs to pay newbies (internships, summer of code, etc)
  • Contributors are thanked with heartfelt sincerity and an explicit acknowledgment of what was good and what could be improved
  • Community creates a casual feedback channel for generating ideas with newcomers (irc, mailing list, slack, whatever works)
  • Code of conduct encourages developers to assume good intent

Level 3: succession planning

The next step for a community is to figure out how to retain those diverse candidates. How do you promote these new, diverse voices in order to ensure they impact your community at a leadership level? If your leadership is stale, made up of the same “usual faces”, people will leave when they start wanting to have more of a say in decisions. If your community sees bright diverse people quietly leave, you may need to focus on retention.

Signs of a level 3 welcoming community:

  • Reviewers are rewarded and questions from newcomers on unclear contributions are encouraged
  • Leaders and/or maintainers are rotated on a set time schedule
  • Vacations and leaves of absence are encouraged, so backup maintainers have a chance to learn new skills
  • Community members write tutorials on the art of patch review, release management, and the social side of software development
  • Mentorship for new presenters at conferences
  • Code of conduct encourages avoiding burnout, and encourages respect when people leave

Level 4: empathy and awareness

Once your focus on retention and avoiding developer burnout is in place, it’s time to tackle the task most geeks avoid: general social issues. Your leaders will have different opinions, as all healthy communities should! However, you need to take steps to ensure the loudest voice doesn’t always win by tiring people out, and that less prominent and minority voices are heard.

Signs of a level 4 welcoming community:

  • Equally values developers, bug reporters, and non-code contributors
  • Focuses on non-technical issues, including in-person discussions of cultural or political issues with a clear follow-up from leaders
  • Constantly improves documentation
  • Leadership shows the ability to recognize their mistakes and change when called out
  • Community manager actively enforces the code of conduct when appropriate
  • Code of conduct emphasizes listening to different perspectives

Level 5: diversity

Once you’ve finally got all that cultural change in place, you can work on actively seeking out more diverse voices and have a hope of retaining them.

Signs of a level 5 welcoming community:

  • Leadership gatherings include at least 30% new voices, and familiar voices are rotated in and out
  • People actively reach outside their network and the “usual faces” when searching for new leaders
  • Community participates in diversity programs
  • Diversity is not just a PR campaign – developers truly seek out different perspectives and try to understand their own privilege
  • Gender presentation is treated as a non-issue at conferences
  • Conferences include child care, clearly labeled veggie and non-veggie foods, and a clear event policy
  • Alcoholic drinks policy encourages participants to have fun, rather than get smashed
  • Code of conduct explicitly protects diverse developers, acknowledging the spectrum of privilege
  • Committee handling enforcement of the code of conduct includes diverse leaders from the community

The thing that frustrates me the most is when communities skip steps. “Hey, we have a code of conduct and child care, but known harassers are allowed at our conferences!” “We want to participate in a diversity program, but we don’t have any mentors and we have no idea what the contributor would work on long term!” So, get your basic cultural changes done first, please.

*pops back off the internet*

Edit: Please stop suggesting BSDs or Canonical/Ubuntu as “better” communities.

Closing a door

This post has been sitting in my drafts folder for a year now. It has never been the right time to post this. I have always been worried about the backlash. I’ve skirted around talking about this issue publicly for some time, but not acknowledging the elephant in the room has eaten away at me a bit. So, here goes.

Here’s the deal: I’m not a Linux kernel developer any more. I quietly transferred the maintainership of the USB 3.0 host controller driver in May 2014. In January 2015, I stepped down from being the Linux kernel coordinator for the FOSS Outreach Program for Women (OPW), and moved up to help coordinate the overall Outreachy program. As of December 6 2014, I gave what I hope is my last presentation on Linux kernel development. I was asked to help coordinate the Linux Plumbers Conference in Seattle in August 2015, and I said no. My Linux Foundation Technical Advisory Board (TAB) term is soon over, and I will not be running for re-election.

Given the choice, I would never send another patch, bug report, or suggestion to a Linux kernel mailing list again. My personal boxes have oopsed with recent kernels, and I ignore it. My current work on userspace graphics enabling may require me to send an occasional quirks kernel patch, but I know I will spend at least a day dreading the potential toxic background radiation of interacting with the kernel community before I send anything.

I am no longer a part of the Linux kernel community.

This came about after a very long period of thought, and a lot of succession planning. I didn’t take the decision to step down lightly. I felt guilty, for a long time, for stepping down. However, I finally realized that I could no longer contribute to a community where I was technically respected, but I could not ask for personal respect. I could not work with people who helpfully encouraged newcomers to send patches, and then argued that maintainers should be allowed to spew whatever vile words they needed to in order to maintain radical emotional honesty. I did not want to work professionally with people who were allowed to get away with subtle sexist or homophobic jokes. I felt powerless in a community that had a “Code of Conflict” without a specific list of behaviors to avoid, and with no teeth to enforce it.

I have the utmost respect for the technical efforts of the Linux kernel community. They have scaled and grown a project that is focused on maintaining some of the highest coding standards out there. The focus on technical excellence, in combination with overloaded maintainers, and people with different cultural and social norms, means that Linux kernel maintainers are often blunt, rude, or brutal to get their job done. Top Linux kernel developers often yell at each other in order to correct each other’s behavior.

That’s not a communication style that works for me. I need communication that is technically brutal but personally respectful. I need people to correct my behavior when I’m doing something wrong (either technically or socially) without tearing me down as a person. We are human. We make mistakes, and we correct them. We get frustrated with someone, we over-react, and then we apologize and try to work together towards a solution.

I would prefer the communication style within the Linux kernel community to be more respectful. I would prefer that maintainers find healthier ways to communicate when they are frustrated. I would prefer that the Linux kernel have more maintainers so that they wouldn’t have to be terse or blunt.

Sadly, the behavioral changes I would like to see in the Linux kernel community are unlikely to happen any time soon. Many senior Linux kernel developers stand by the right of maintainers to be technically and personally brutal. Even if they are very nice people in person, they do not want to see the Linux kernel communication style change.

What that means is they are privileging the emotional needs of other Linux kernel developers (to release their frustrations on others, to be blunt, rude, or curse to blow off steam) over my own emotional needs (the need to be respected as a person, to not receive verbal or emotional abuse). There’s an awful power dynamic there that favors the established maintainer over basic human decency.

I’m not posting this for kernel developers. I’m not posting this to point fingers at specific people. I’m posting this because I grieve for the community that I no longer want to be a part of. I’m posting this because I feel sad every time someone thanks me for standing up for better community norms, because I have essentially given up trying to change the Linux kernel community. Cultural change is a slow, painful process, and I no longer have the mental energy to be an active part of that cultural change in the kernel.

I have hope that the Linux kernel community will change over time. I have been a part of that change, and the documentation, tutorials, and the programs that I’ve started (like the Outreachy kernel internships) will continue to grow in my absence. Maybe I’ll be back some day, when things are better. I have a decades-long career in front of me. I can wait. In the meantime, there are other, friendlier open source communities for me to play in.

When one door closes, another opens; but we often look so long and so regretfully upon the closed door that we do not see the one which has opened for us.

– Alexander Graham Bell

(FYI, comments will be moderated by someone other than me. As this is my blog, not a government entity, I have the right to replace any comment I feel like with “fart fart fart fart”. Don’t expect any responses from me either here or on social media for a while; I’ll be offline for at least a couple days.)

Edit: I would highly recommend you read my follow-up post, “What makes a good community”

Edit 2: Please stop suggesting BSDs or Canonical/Ubuntu as “better” communities.

I won Red Hat’s Women in Open Source Award!

At Red Hat Summit, I was presented with the first ever Women in Open Source Award.  I’m really honored to be recognized for both my technical contributions, and my efforts to make open source communities a better place.

For the past two years, I’ve worked as a coordinator for Outreachy, a program providing paid internships in open source to women (cis and trans), trans men, genderqueer people, and all participants of the Ascend Project.  I truly believe that newcomers to open source thrive when they’re provided mentorship, a supportive community, and good documentation.  When newcomers build relationships with their mentors and present their work at conferences, it leads to job opportunities working in open source.

That’s why I’m donating the $2,500 stipend for the Women in Open Source Award to Outreachy.  It may go towards internships, travel funding, or even paying consultants to advise us as we expand the program to include other underrepresented minorities.  There’s a saying in the activist community, “Nothing about us without us.”  We want to make sure that people of color are involved with the effort to expand Outreachy, and it’s unfair to ask those people to perform free labor when they’re already paid less than their white coworkers, and they may even be penalized for promoting diversity.

I urge people to donate to Outreachy, so we can get more Outreachy interns to conferences, and expand our internships to bring more underrepresented minorities into open source.  Any donation amount helps, and it’s tax deductible!

Sarah Sharp wins Women in Open Source Award!

OPW Successes and Succession Planning

It’s been a busy winter for the FOSS Outreach Program for Women (OPW).  On October 13, 2014, seven (yes, seven!) of the former Linux kernel OPW interns presented their projects at LinuxCon Europe.  From left to right, the OPW alumni are: Valentina Manea, Kristina Martšenko, Ana Rey, Sarah Sharp (coordinator), Himangi Saraogi, Teodora Băluţă, Andreea-Cristina Bernat, and Rashika Kheria.

OPW Kernel Interns & Coordinator at LinuxCon Europe 2014

The OPW presentation room was packed, and I had a couple Linux kernel developers come up to me afterwards and say, “I didn’t realize how complex some of the projects were!”  The OPW Linux kernel interns presented their work on Staging IIO drivers, Coccinelle, RCU, removing tree-wide warnings to allow more gcc warning flags to be turned on, USB over IP, displaying kernel oopses in QR codes, and netfilter tables.  Slides are available here.

I’d like to thank the Linux Foundation for covering additional travel costs for several of the interns.  I would also like to thank the internship sponsors, Intel, Linux Foundation, Linaro, and Codethink.  Finally, this year’s internships could not have been possible without the time volunteered by our Linux kernel mentors: Adrian Chadd, Bob Copeland, Andy Grover, Nick Kossifidis, Greg Kroah-Hartman, Paul McKenney, Pablo Neira, Julia Lawall, Rik van Riel, Luis R. Rodriguez, Josh Triplett, and Peter Waskiewicz Jr.

The OPW application period for the December 2014 to March 2015 internships closed on Friday, Oct 31.  The Linux kernel continues to be the most popular project in OPW:

  • 24 people applied for an internship and got at least one patch into the Staging tree
  • 551 staging cleanup patches were accepted during the one-month application process
  • 382 files changed, 3464 insertions(+), 4243 deletions(-)

I’m pleased to announce that the following people have been selected for OPW Linux kernel internships from December 2014 to March 2015:

  • Iulia Manda will work with Josh Triplett on kernel tinification.
  • Tina Ruchandan will work with Arnd Bergmann on fixing the kernel subsystems that still have issues with the 32-bit time wrap in 2038.
  • Tapasweni Pathak will work with Julia Lawall and Nicolas Palix on using Coccinelle to track the increase of common bug patterns since a whitepaper was published in 2011.
  • Roberta Dobrescu will work with Octavian Purdila and Daniel Baluta on migrating Staging IIO drivers to use the kernel’s I/O APIs.
  • Ebru Akagunduz will work with Rik van Riel on improving transparent huge page swap performance.

I would like to thank the sponsors for this round: Intel, Codethink, and Samsung.  Without their funding, it would not be possible to pay the OPW interns for their hard work.

The OPW Linux kernel internships have been a great success.  So it is with mixed, bittersweet feelings that I announce I will be stepping down as the coordinator for the Linux kernel OPW internships.  The program is running smoothly, and I find joy every day in watching the OPW Linux kernel interns learn and grow.  However, I am no longer a part of the Linux kernel community.  Julia Lawall, the maintainer of Coccinelle, will be stepping into the role of OPW kernel coordinator.

Fortunately, I’m not leaving the OPW program altogether.  I’ve agreed to step up to help Marina Zhurakhinskaya and Karen Sandler coordinate the larger OPW program, and I will be there to help Julia when she takes over as kernel coordinator in February 2015.

I am proud to have jump-started the effort to make the Linux kernel community more diverse by providing a pipeline for women to get involved in Linux kernel development.  With the help of mentors and sponsors, we’re slowly increasing the diversity of the Linux kernel community.  Four of the eleven OPW interns from the first year the kernel participated in OPW have gotten jobs as Linux kernel developers.

We’re not only improving the community for women developers. We’re helping all newcomers by creating detailed tutorials on how to make your first Linux kernel patch and sharing tips for how to break into open source development culture.  OPW is encouraging Linux kernel developers to think about mentorship and sharing their todo lists, which can only help newcomers find larger kernel projects to tackle. Diversity efforts improve communities for everyone.

Installing Debian on ASUS UX301LA

I ran into some issues with installing Debian on my new Haswell ASUS Zenbook laptop. I’m documenting them on my blog to try and ease anyone else’s struggles.

I used the Debian CD image instead of the netinstall. I accidentally installed Debian stable, but later added apt sources to use Debian testing. I used the 20140712 Wheezy amd64 install ISO on a USB 3.0 flash drive.  The drive was plugged into the right side, but I doubt that makes a difference.

Step 1: Reconfigure the BIOS boot options.

Plug in the USB drive, and boot the laptop while pressing F2 to go into the BIOS.  Go into the boot menu, secure boot menu, and disable secure boot.  Go back to the boot menu and enable CSM. Change the boot order so that the USB UEFI boot option is first (before the Windows boot option).  Save, unplug the USB drive, exit BIOS, and boot into Windows.

Step 2: Repartition drives in Windows.

Hit Windows key, type ‘disk’ and click the partitioner.  I had the laptop version with two 256GB drives with software raid, which showed up as a 256GB drive D:/ and a 200GB drive C:/.  I deleted drive D:/, and shrunk drive C:/ as much as the partitioner would allow (which still left 60GB free for Windows to use for my Steam games that don’t have Linux support yet).

Step 3: Go through some of the Debian installer process.

Start the Debian installer. Ignore that it can’t find your network device, pick a host name, and add root and normal users.  Choose ‘Manual’ when it comes to partitioning disks.  You’ll see the software raid partition looks like this:

Step 4: Fix the EFI partition.

Select the fat32 EFI system partition, and change ‘Use as’ to ‘EFI boot partition’.

Step 5: Add a new Linux partition.

Choose ‘Automatically partition the free space’ and choose ‘All files in one partition’.  Once the partitions are created, select the ext4 root partition (‘/’) and (optionally) change the mount options to include relatime.  With relatime, file access timestamps are only updated when they are older than the file’s modification time, which means fewer disk writes to the SSDs, thus increasing the lifetime of the disks. If you like, set a label for the root partition so you can see it in Windows later.  After you’re done, the partition disks screen should look like this, and you can hit ‘Finish partitioning and write changes to disk’:
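If you want to confirm or tweak the mount options after installation, the root entry in /etc/fstab ends up looking something like this (the UUID below is a placeholder; check yours with `blkid`):

```
# Example /etc/fstab entry for the root partition with relatime enabled.
# The UUID is a placeholder -- substitute the one blkid reports.
UUID=0a1b2c3d-0000-4000-8000-123456789abc  /  ext4  relatime,errors=remount-ro  0  1
```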

Step 6: Go further in the Debian install process.

Don’t use a network mirror. Pick whether you want to send installed package information to Debian or not. Select Debian desktop and system utilities and continue.

Step 7: Fix grub install failure.

Jamey Sharp figured this out, thanks for his help!  It seems that the standard grub installer was confused by the “Intel Matrix Storage Manager” (IMSM), the firmware support for booting RAID setups.  Basically, the BIOS knew about the RAID setup, Linux (mdadm and the kernel) knew about IMSM, but grub didn’t. Grub didn’t know the BIOS used IMSM and could boot from the disk, so grub didn’t think the hard drive was bootable.  Silly grub!

Go to a shell by pressing CTRL+ALT+F2.  Type this command:

df /target

Look at the output of that command.  It will list the file system mount that corresponds to /target (the mount point of the root file system where Debian installs to).  It should look something like /dev/md126p6.  The ‘p6’ is one particular partition on the RAID array, but we want grub to use the whole disk, so we strip off the partition designation for the next couple of commands.  Type this command:

cat > /target/boot/grub/device.map

and then type:

(hd0) /dev/md126

and hit CTRL+d to stop writing to the file.  Then go back to the installer (in the text installer, that’s ALT+F1, but it’s ALT+F5 in the graphical installer).  Hit ‘continue’ and ‘continue’ again to go back to the Debian installer main menu.  Again, hit enter on the ‘Install grub bootloader on the hard disk’.  The installation should complete successfully.
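Putting Step 7 together, the suffix-stripping logic can be sketched like this; the sample value below stands in for whatever `df /target` actually reports on your machine:

```shell
# Sketch of the Step 7 fix. In the installer shell you would get
# rootdev from df; here a sample value stands in for that output.
rootdev=/dev/md126p6   # e.g. rootdev=$(df --output=source /target | tail -n 1)

# Strip the partition suffix ("p6") so grub maps the whole RAID array.
disk=${rootdev%p[0-9]*}

# This is the single line that belongs in /target/boot/grub/device.map:
echo "(hd0) $disk"
```

With the sample value, the last line prints `(hd0) /dev/md126`, which is exactly what you typed into the file by hand above.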

Step 8: Add grub UEFI boot option.

Unplug the USB key, and hit F2 to go back into the BIOS.  The Debian installer didn’t create a UEFI boot option for the grub installation, so we have to make one manually.  Go into the boot menu, and hit ‘Add New Boot Option’ and then ‘Add boot option’.  Pick a name for it (‘GRUB’). Under ‘Path for boot option’, there should only be one choice of filesystem, which starts with PCI. Under ‘Select a file to boot’ choose EFI and then debian and finally grubx64.efi.  Choose ‘Create’ and you’re done adding the boot option.  Hit ‘escape’ to get back to the boot menu.  You’ll need to change the boot option priorities to make grub the first boot option (which means you’ll need to go into the BIOS and change it back should you want to boot into Windows).  Save and exit the BIOS.

Step 9: Update your packages and install Intel wireless firmware.

Debian stable (which is running a 3.2 kernel) will boot into GNOME 3 compatibility mode because it doesn’t have kernel or mesa support for the Haswell Intel graphics.  The kernel also doesn’t recognize the wireless PCI device.  You’ll have to use the USB network adapter and plug into an ethernet cable.

Add the Debian testing sources by editing /etc/apt/sources.list so that it says:

deb jessie/updates main contrib non-free
deb jessie main contrib non-free

You may want to use a different local mirror closer to you.
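For reference, a complete pair of lines might look like the following; `http.debian.net` is just one possible mirror (an assumption on my part), so substitute whichever mirror is closest to you:

```
# Example /etc/apt/sources.list for this step. http.debian.net is one
# possible mirror (an assumption); use your local mirror instead.
deb http://security.debian.org/ jessie/updates main contrib non-free
deb http://http.debian.net/debian jessie main contrib non-free
```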

Run `aptitude` as root, and mark all packages that are upgradable. Mark the firmware-iwlwifi package for installation. And install all your new packages from Debian testing!

After a reboot, both wifi and graphics should be working correctly.

The Gentle Art of Patch Review

As the next round of the FOSS Outreach Program for Women (OPW) approaches, my mind turns to mentorship, and lessons learned when dealing with newcomers to open source projects.  Many open source contributors have been in the FOSS community for long enough to forget how painful their first experience contributing to a project was.  As the coordinator for the Linux kernel OPW internships, I get to help newcomers go through that experience every six months.  I’ve learned a lot about how we, as open source reviewers, maintainers, and mentors, can help newcomers during their first contributions, and I’d like to share some of the perspective I’ve gained from OPW.

The Newcomer’s Perspective

As a newcomer, you’ll come at the project with enthusiasm and determination to do your best to make a really good first contribution.  You’ll try to find all the documentation for the project you’re working on, and read through it, only to realize it’s completely outdated and incomplete.  You’ll ping mentors and ask questions, but you may not be able to reach the right person to answer your question.  So you do the best you can with the resources you find, cross your fingers, and submit your first contribution.

It’s common for newcomers to blame themselves when they make mistakes in their first contributions.  You’ll cringe, wring your hands, smack your forehead, or maybe even put your head in your hands.  Then you’ll sigh and try again.  No matter how good the documentation for contributing to the project is, how meticulous you are, you will slip up at some point.  And that’s fine, because you are going through a process of learning something new, and expanding your skills.  The most productive contributors see each mistake they make as a growth opportunity, instead of a personal failure.

The Maintainer’s Perspective

As a long-standing open source contributor, you may get contributions from newcomers all the time.  You’ll see several of them make the same mistakes over and over again, and if you have enough time, you’ll update your project documentation to help people avoid those mistakes.  Often you don’t have time, and the documentation doesn’t get updated.  Or you’ll think that something is so blindingly obvious that everyone should understand it, without realizing how much specialized experience you need to have that knowledge.

At some point as a maintainer, you will be completely overloaded with contributions from both newcomers and familiar, trusted contributors.  It’s easy to review those contributions from long-standing contributors, because they know your expectations and the rules around contributing.  You trust them to write solid code containing very few bugs.  So you review the contributions from trusted contributors, and put off reviewing contributions from newcomers until you have a large block of time to thoroughly review the newcomer’s contribution.

It’s tempting to just go through the newcomer’s contribution from start to finish, commenting on every single thing they missed.  The maintainer’s mindset is, “Ok, I have time, I should share my knowledge with this person who is obviously missing some tribal knowledge they need to contribute to my project.”  From the newcomer’s perspective, what they experience is their contribution being ignored for days or even weeks, followed by a very long email full of nit-picky comments on coding style, criticism of their code structure, and even comments about their spelling and grammar.  Even if the review is fair and neutrally worded with a focus on their technical mistakes, it still feels very harsh.

We Can Do Better

How can we make this process better on both sides?  How can we make the first patch review less harsh, and still respect the maintainer’s valuable time?  Can we make the turnaround time on first patch review even shorter?  When I was the xHCI driver maintainer, I started experimenting with a different way of reviewing contributions from newcomers that I think might help address all three of these issues.

The Three-Phase Contribution Review

Instead of putting off reviewing first-time contributions and thoroughly reviewing everything in the contribution at once, I propose a three-phase review process for maintainers:

  1. Is the idea behind the contribution sound?
  2. Is the contribution architected correctly?
  3. Is the contribution polished?

You can compare these contribution review phases to the phases of building a new house or taking on a remodeling project.  The first phase is a simple yes or no on the architectural diagram, the big idea of the contribution.  The second phase is getting all the structural issues correct and making sure the plumbing and electrical all connect properly.  The third phase is making everything polished, sanding off the rough corners, and slapping on a coat of paint to match whatever color the bike shed is currently painted.

Phase One: Good or Bad Idea?

The first phase of the contribution review should only require a simple yes or no answer from the maintainer: “Is this contribution a good idea?”  If the contribution isn’t useful or it’s a bad idea, it isn’t worth reviewing further.  The best action in this case is to refocus the newcomer on a better idea or a completely different area they could work on.  Or open a discussion with the newcomer and other contributors as to what should be done to address the issue in a different way.

If the contribution is worthwhile, but you don’t have time to go onto the second phase of patch review, do NOT say nothing.  Instead, drop the contributor an email that says, “Thanks for this contribution!  I like the concept of this patch, but I don’t have time to thoroughly review it right now.  Ping me if I haven’t reviewed it in a week.”  This builds the newcomer up by expressing appreciation for the time and effort they put into creating this contribution, and lets them know they’re on the right path.  It also gives you incentive to actually move onto phase two, because the contributor will bug you again if you haven’t reviewed the contribution.

Phase Two: Is this Architecturally Sound?

In phase two, you review whether the code (and only the code) is sound at an architectural level.  Is the code behavior correct?  Are they modifying the right functions, or does the code need to be moved around?  Have they structured their build files correctly?  Do they need to refactor any code?  Do they need to get buy-in on the code structure from other maintainers?  Are there potential hazards or tricky parts of the code that everyone needs to review carefully?

You will need to squash the nit-picky, perfectionist part of yourself that wants to comment on every single grammar mistake or code style issue.  Instead, include only a sentence or two with a pointer to coding style documentation, or any tools they will need to run their contribution through.  If their patch needs to be updated against a newer version of your project, or a different maintainer’s upstream repository, point that out.  Avoid nit-picking every instance where they violate your project’s contribution style rules. Your eyeballs may be bleeding from the number of camel case variable names or variable names that use type encoding, but take a deep breath and ignore that.  Let them explore the tools and documentation, and fix most of their mistakes on their own.

Double check that the documentation and tools actually cover the mistakes you see in the code, and if they don’t, update them.  Your documentation and tools should clearly spell out the format of a valid contribution; if they don’t, you need to address that documentation debt.  If you don’t have time to address it yourself, tell the contributor what needs to be fixed, and see if they have the time to address it.  Don’t be silent just because you don’t have time to fix it.

Phase Three: Is the Contribution Polished?

From a newcomer’s perspective, after phase two is complete, they’re hooked on getting their contribution in.  You’ve worked with them on an architectural level, and they know you want to accept their contribution.  They’re emotionally invested in getting their contribution into your project, and they’ve learned a lot by going through a couple contribution revisions.  Thank the contributor for being patient this far and remind them that you’re willing to accept the contribution, but they need to clean up a few small things first.

Now is the time for phase three: the polishing phase.  In this phase, you finally get to comment on the meta (non-code) parts of the contribution.  Correct any spelling or grammar mistakes, suggest clearer wording for comments, and ask for any updated documentation for the code.  It doesn’t make sense to create documentation for the code until the code is structurally sound, which is why the documentation phase comes last.  You may also need to encourage them to write a better commit message, mark the patch to be back ported to stable versions of your software, or Cc the right maintainers.

For a newcomer, this third and final phase can be more painful than the architectural critiques in the second phase.  Many young programmers lean towards science, math, and technology because they feel like they don’t excel in writing or people skills.  Contributors may also be writing in a language that is not their native tongue.  That’s why this nit-picky phase comes last: contributors face it only after they’re emotionally invested in getting their patches into your project. Be gentle, patient, and compassionate.  As a maintainer, you may suggest comments or patch descriptions that you hope the contributor simply copy-pastes into their patch.  You may have to just edit the patch description yourself.

How Does This Benefit Maintainers?

I’ve found that this three-phase contribution review process saves me (as a maintainer) a lot of mental stress.  The first phase is a simple yes or no question (“Is this a good or bad idea?”), which means I don’t procrastinate on reviewing first time contributions.  Being up front with contributors about not having time to review their contribution can initially feel like shirking duties, but I feel a mental load lifting when I get over that and simply say something like, “Hey, this patch looks like a good idea, but I don’t have time to review it right now. I’m heading to a conference next week, and need to work on my slides.  Can you ping me in two weeks if I haven’t reviewed your code?”

If you’re honest with contributors about your time commitments, they know their contribution is wanted, and they can pass your time commitments onto their boss or program manager.  Also, if you find yourself delaying contribution review often, it may be a sign you need a co-maintainer or you need to ask other contributors to do more code review.

The absolute worst thing you can do during phase one is be completely silent.  The newcomer doesn’t know whether their contribution is a good or bad idea, and any discussion that needs to happen with other maintainers to modify the fundamental concept never happens.  That’s why phase one is a simple yes or no answer, in order to get the code review ball rolling.

I’ve also heard some maintainers state that they want to dump all their review into phase two.  They have precious little time, and they fear they will forget specific feedback if they break code review into several phases.  I will often notice nit-picky coding style issues during my architectural review, and I will make a note to myself to nip that pattern in the bud in phase three.  Keeping a dated text file per patchset or even replying to the patch but only adding your own email address in the To field will help you keep track of the issues that need to get addressed in phase three.
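One way to keep that dated text file, sketched as a shell snippet (the patchset name and the example nit are made up for illustration):

```shell
# Keep a dated notes file per patchset for phase-three nits, so the
# architectural review email stays focused. The patchset name and the
# example nit below are made up for illustration.
mkdir -p review-notes
notes="review-notes/$(date +%F)-example-patchset.txt"

# Jot down a nit during phase two instead of sending it now.
echo "nit: camelCase variable names in patch 3/5" >> "$notes"

# When phase three arrives, read the file back.
cat "$notes"
```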

Often by the time you get past the architectural discussion in phase two, you’ll find many of your initial nit-picky criticisms were addressed.  A conscientious contributor will look at the documentation and tools you point out in phase two, and will address most of them in their next revisions.  What will be left for the third (polishing) phase is mistakes made because of undocumented tribal knowledge, or rules that are undocumented because they differ from maintainer to maintainer within the project.

Try It Out!

The following three-phase contribution review process should help both maintainers and newcomers:

  1. Is the idea behind the contribution sound?
  2. Is the contribution architected correctly?
  3. Is the contribution polished?

Maintainers will be able to respond more quickly to contribution review if they focus on just answering one question during the first phase of review: “Is this a good or bad idea?”  Newcomers will be encouraged by a timely email that states whether the basic concept of their patch is sound.  Both the maintainer and the contributor benefit from splitting the actual code review into an architectural discussion, followed by a polishing phase.  Maintainers will save themselves time if they simply point out documentation and tools contributors should use to ensure their contribution is up to community standards, and the nit-picky polishing phase is saved for after the newcomer is emotionally invested in getting their contribution into your project.

I think this process should both save maintainers time, and decrease the bounce rate for newcomers, so I encourage you to try it out!