
Open Source Talks

OSCON 2014 speaker interviews

Open Voices, Issue 13

[Opensource.com](http://www.opensource.com/)

Copyright

Copyright © 2014 Red Hat, Inc. All written content licensed under a

[Creative Commons Attribution-ShareAlike 4.0 International

License](http://creativecommons.org/licenses/by-sa/4.0/).

Introduction

The O'Reilly Open Source Convention—or

[OSCON](http://www.oscon.com/oscon2014), as it's now popularly known—is

one of the world's premier open source events. For more than a decade,

open-minded developers, innovators, and business people have gathered

for this weeklong event, which explores cutting-edge developments in the

open source ecosystem. And 2014 was no exception.

Eagerly awaiting another year of open source wonders, the Opensource.com

community caught up with a handful of notable OSCON speakers to gather

behind-the-scenes stories about their passions for open source. This

book collects the interviews we conducted.

Open source's identity crisis (interview with Karen Sandler, Software Freedom Conservancy)

Bryan Behrenshausen (originally published July 2014)

For Karen Sandler, software freedom isn't simply a technical matter. Nor

is it a purely ideological one.

It's a matter of life and death.

Sandler, Executive Director of the non-profit [Software Freedom

Conservancy](http://sfconservancy.org/overview/), says software freedom

became personal when she realized her pacemaker/defibrillator was

running code she couldn't analyze. For nearly a decade—first at the

[Software Freedom Law Center](https://www.softwarefreedom.org/), then at

the [GNOME Foundation](http://www.gnome.org/foundation/) before

Conservancy—she's been an advocate for the right to examine the software

on which our lives depend.

And, at this year's [Open Source

Convention](http://www.oscon.com/oscon2014), held July 20–24 in

Portland, Oregon, Sandler will discuss a serious impediment to that

cause: the way software developers negotiate their relationships with

the projects to which they contribute. Difficulties advancing software

freedom are part and parcel of this larger "identity crisis," she says.

We caught up with Sandler before she takes the stage at OSCON 2014.

What first attracted you to the issue of software freedom?

Well, at first, in the mid 1990s, when I was in school at Cooper Union,

free software was a really interesting thing we were playing with in the

computer lab. I remember thinking "this Linux thing is a neat idea ...

too bad it probably won't go anywhere." Then I went to law school and

became a cross-border securities lawyer, leaving most of my tech

background behind. Eben Moglen (who had been one of my professors at

Columbia Law School) formed the Software Freedom Law Center (SFLC) just

when I had decided that I wanted to do something new.

I remember going into my first day thinking "open source is cool"; I

wasn't really focused on the ideology. Working with SFLC's passionate

clients opened my eyes to the important issues, and the ideology really

hit home when I was diagnosed with my heart condition and prescribed a

pacemaker/defibrillator. My life depends on the proprietary software

that is literally connected to my heart. I can't review the code, and I

know that there have been no proper procedures for its review by anyone

else. What software does your life depend on? It may not be implanted in

your body, but it may be controlling your car, helping you choose your

democracies, and running your stock markets. It is impossible now for me

to see software freedom as anything other than of core importance to our

society as a whole.

Your professional work as an advocate for free and open source software

began at the Software Freedom Law Center. Then you transitioned to the

GNOME Foundation and, now, to the Software Freedom Conservancy—all in

the span of about a decade. What ties all these positions together?

To be honest, it's funny to hear those described as separate! At SFLC I was

providing the legal help to defend software freedom. GNOME was one of my

clients. When I saw the first screenshots of GNOME 3 I knew that this

new vision for the desktop was the way to go. I knew it would be

impossible to get any mainstream adoption of free software without there

being good looking, easy-to-use interfaces for new users, and I wanted

to do everything I could to help.

Also while I was at SFLC, I co-founded Conservancy. We knew that free

software projects needed a nonprofit home and the infrastructure to

operate and that it was impractical for every project to form its own

organization. I have served as an officer of Conservancy since its inception.

So with my move from GNOME to Conservancy, I've swapped my volunteer and

paid roles. I'm very committed to both organizations and am pleased to

say that I've recently been elected to the GNOME Board of Directors. So,

I consider all three jobs part of the same important work for software

freedom.

What's the most important lesson about free and open source software

projects you learned while working as Executive Director of the GNOME

Foundation?

There are so many lessons that I learned; the GNOME community is so

amazing. One of the most important lessons is that good nonprofit

ideology and governance are essential for a healthy free software

community over time. It's important to lay down good groundwork early in

a project to avoid control by one or two companies. While there has been

a lot of interest by companies in GNOME, when it came time to set up an organization, the community decided on a 501(c)(3) charity. They got companies to participate in a non-voting advisory board, which provided

support to the Foundation but kept control in the hands of the

community.

Conservancy is also a charity. In order to join, projects must be

reviewed by our Evaluation Committee, and once they join they cannot

allow any corporate control of their project. We even have project

leaders subject to a formal Conflict of Interest Policy (which usually

only applies to board members). Joining Conservancy is a statement that

a project really is committed to the public good—and that they're

willing to commit their assets to it permanently.

Also, a focus on ideology and a commitment to no corporate control

motivate the community to do more, and for the right reasons.

What are your top priorities as you begin your work for the Software

Freedom Conservancy?

Raising money to fund the good work of the organization. Want to donate?

I think the main thing I'd like to do is grow the organization to serve

more fantastic free software projects and to help explain why their

charitable nonprofit structure is so meaningful. Conservancy has been

doing so much with such a small staff. We launched a project to work

on free software that will help nonprofits with their

accounting—something that proprietary software doesn't do well and

exactly the kind of software that should be free and open. Right now

most nonprofits rely on proprietary software and pay exorbitant

licensing fees for software which fundamentally contradicts their

underlying missions of charity and sharing. Worse still, the software

that exists doesn't really do a great job at complicated nonprofit

accounting. We'd like to make it so that nonprofits can work together to

improve the situation. If we can get this project up and running it has

the potential to save the nonprofit sector millions in licensing fees

every year.

In a recent episode of your podcast, "Free as in Freedom," you explained

the links between software freedom, technological access, and social

justice. Tell us about those connections. Do we need to change the way

we explain free and open source software if we're going to

successfully advance the cause?

Definitely. I read the article ["The Meme Hustler" by Evgeny

Morozov](http://thebaffler.com/past/the_meme_hustler) and it really got

me thinking. He talks about how the free software movement was cleverly

manipulated into the open source marketing campaign. I've never been one

to get hung up on terminology, but I can see in retrospect how that

marketing impacted my own thinking at the time. I don't care if we use

the term "open source" or the term "free software" but I do care that

we're talking about freedom.

Recently, when watching a few episodes of the TV show Silicon Valley, I

got a little sick when I saw that the companies represented on the show

all talked about making the world better through whatever profit-driven

software product it was that they were bringing to market. Being clear

about our ideological goals and clearly connecting the dots between

software freedom and social justice is important. Most of our developers

know that software freedom is right but it's very hard to articulate,

even for someone like me who's been trying to advocate to nondevelopers

for a while.

I've been thinking a lot about the messaging in our movement because if

we can't articulate why what we do is important, what are we doing? As

we integrate software, and technology in general, into our lives, it's

becoming clear that we are making a grave mistake by letting single

companies control the systems we rely on. We're building complex

infrastructure where everything interacts with everything else—and we're

only as safe as our weakest link. Software freedom is not the only piece

in this puzzle but it is the cornerstone to ethical technology.

At OSCON, you'll speak about a persistent "identity crisis" in free and

open source software communities. What, specifically, do you think is in

crisis? What forces or factors contribute to these identity issues?

In free and open source software we all wear many hats. We use the term

"we" to mean a nonprofit community of volunteers one moment and then

"we" to mean our employers the next. Free software needs its

contributors to be honest with themselves and each other about what

their interests are and who they are speaking for at different times. We

need better procedures in place for our community governance. How can we

be willing to let corporate influence overrun us? But I'll have to stop there, as I don't want to spoil the talk!

From zero to Spark Core in two years (interview with Zach Supalla, Spark)

Jen Wike (originally published July 2014)

[Spark](https://www.spark.io/) is a company inspired by Zach Supalla's

deaf father.

"My dad has lights in his house that flash when someone rings the

doorbell. I wanted to make lights that would flash when my mom texted

him, so they could stay in better touch," explains Zach.

This led Zach to create a connected lighting product called the Spark Socket, which his team launched on Kickstarter in late 2012. The Socket

was unsuccessful, but afterwards Zach pivoted his team to focus on

developing tools for others building connected products.

How long have you been working with electronics?

I'm pretty new to the electronics world. I built my first prototype in

January 2012, and that was the first time I touched electronics. I was

encouraged by the amazing Arduino community and the wealth of resources

available online from places like SparkFun and Adafruit. About a year

later, I was making electronics designs on par with something that

somebody with an Electrical Engineering degree might have created.

What products have you made related to the Internet of Things?

Our primary physical product is the [Spark

Core](http://spark.github.io/), which is a development kit for Wi-Fi

connected products. It uses Wiring, the programming language made

popular by Arduino, but integrates a Wi-Fi module and a cloud service to

make it much more powerful and easier to use to build a connected

product. The Spark Core hooks into Spark OS, our full-stack open source operating system for connected products, which extends from the

device to the cloud to the user.

Our tools are innovative because we've created a system that makes it

extremely easy to build very [powerful connected

products](http://spark.hackster.io/), and bring those products to

market. We've thought of everything from ease of use to security and

scalability so that our customers can focus on their products instead of

on the underlying infrastructure.

You launched the Spark Core on Kickstarter. What do you think it and crowdfunding mean for open products?

We launched the Spark Core on Kickstarter because we wanted to see what

the demand looked like (which turned out to be quite high). We were

asking for $10K, and we raised nearly $600K in 30 days!

We also wanted to build a community around our product and gather

feedback before we went to manufacturing. We would absolutely do

Kickstarter again in a heartbeat; it was an incredible experience that

helped define our company. We love crowdfunding because it completely

changes the dynamics of creating a product; you can bring in an audience

earlier, which leads to a better product, and the whole process is so

much less risky because you can figure out demand so much earlier in the

process.

I think what Kickstarter and the amazing products coming out of it prove

is that there are huge opportunities available for entrepreneurs who

want to bring a product to market. That means that so many "Makers" who

spend their weekends tinkering with hardware could make it their life's

work by starting a business around their ideas.

As for open source, I think that the electronics world has been

proprietary for a very long time, but open source is taking hold,

and will eventually play a huge role, just like it does in software. The

Internet is built on open source underpinnings like GNU/Linux, and I

hope that soon the hardware world will be too.

What's your current project? What are you making better?

Everything. We're working on making our platform more reliable and

easier to use, and soon we'll be thinking about how to extend beyond

Wi-Fi and into other wireless technologies.

Give us a sneak peek into your OSCON 2014 talk.

At OSCON I'll be talking about our "[open source Nest-alike

thermostat](http://www.hackster.io/cazzo/building-an-open-source-nest),"

which we built earlier this year during a 24-hour hackathon. I hope to

showcase how a product is built and introduce the audience to open

source hardware tools like ours by walking through how we built a

powerful prototype in a day.

Building, deploying, and distributing software with JFrog (interview with Yoav Landman, JFrog)

Travis Kepley (originally published July 2014)

Founded in 2008, JFrog provides open source solutions for package

repositories and software distribution aimed at a new breed of

developers. With a focus on open source and the burgeoning cloud scene,

JFrog has garnered their fair share of awards and press from industry

heavyweights and communities alike.

[Yoav Landman](https://www.linkedin.com/pub/yoav-landman/0/847/559),

co-founder and CTO of JFrog, took some time prior to his trip to [OSCON

in Portland this year](http://www.oscon.com/oscon2014) to chat with me

about communities, open source, and some lessons learned on JFrog's

journey so far.

First things first, I love that you are not shy about your open source

heritage. The JFrog website homepage displays the Artifactory Open

Source information. Can you fill us in on how JFrog leverages the

greater community to build your solutions?

JFrog constantly interacts with many open source communities. This helps

us make our products better by receiving constant feedback faster than

we could ever have wished. Bugs, feature requests and random rants or

praises via Jira, Twitter or Google+ usually land very quickly on open

ears and are discussed and resolved publicly, which leads to a faster

resolution.

Another thing we do is provide free service to open source communities,

like Spring (Pivotal), Scala, Vagrant, Ruby, Groovy, Vertx and Jenkins.

We are hosting their build artifacts on Artifactory Online and/or are

responsible for the distribution of their OSS packages to end users via

[Bintray.com](https://bintray.com/). This is the best way to stress-test

our products, discover performance issues and other bugs and demonstrate

to our paying customers how our solutions scale. The load and challenge

created by servicing very popular open source projects is often vastly

greater than that of a commercial business.

Speaking of, if you had to describe the Artifactory community using only

three adjectives and a color, how would you describe it?

Well-connected, vibrant, and eager to make things better.

Color: Green, of course! What else?

Betanews recently published [a piece featuring Fred Simon](http://betanews.com/2014/06/13/the-future-of-open-source-speeding-technology-innovation/), co-founder of JFrog, on the power of the open source community. What would you say is the biggest lesson you personally have learned from the greater open source community?

Open source communities are the main drivers of innovation nowadays. It

is the added value that commercial companies put on top of the open

source that makes them successful. The precondition, nonetheless, is

having a core successful open source project or platform that has gained

a large-scale community. You can find many examples of that today, where

rapid community adoption has been the springboard for a successful

company.

Also, the way I see it, open source today is not limited to open source software; it also extends to forming open source communities and

providing platforms that serve and assist the growth of such

communities. Examples are platforms such as GitHub and Bintray.

What has open source provided JFrog that gives you a leg up on your

competition?

Quality, instant feedback on usability and bugs; being able to become

better by answering the production needs of large-scale communities that others don't get to serve; having many friends all over the world who love JFrog for its products (as well as its people) and give us the greatest

unbiased, candid PR.

Can you give us some insight on what OSCON goers can expect from JFrog

at the event? What do you hope users of your software can gain from

visiting you guys in person?

On Tuesday we are going to have two great talks, openly sharing our experience, challenges, and sometimes frustration in building software that scales. We will be demoing Bintray and Artifactory, showing some exciting new features, such as Bintray for Business and Artifactory's new

support for NPM and Debian packages. You are also welcome to join us in

an Open Hours discussion about continuous integration and delivery on

Wednesday. And, as usual, we'll be giving away our highly sought-after

SuperFrog tees in the booth.

Given the nascent nature of cloud and JFrog's position in cloud

architecture, what are some of the dangers you see as closed source

companies start to enter the fray? How can we guarantee fair access to

technology and information as we move software away from end-users'

environments and control?

I believe the cloud will have an opposite effect on how things are done,

causing things to be more open. With competition between cloud providers

comes the need to have a reasonable data-out option. That means less

vendor lock-in and more design by means of service abstraction.

Another rising need is for backup plans in the form of

multi-cloud deployments and integration with private clouds.

These requirements have already led to open standards in service

abstraction, and virtualization from development to production, with

frameworks like Vagrant, Docker, etc. So at the end of the day, things

are going to become more open, and companies that will try to enforce

closed source stacks will not manage to stay in the game. Even giants

like Microsoft have realized that and are embracing open source, either

by open sourcing their tools or by promoting standards like Chocolatey

and NuGet that push for open source consumption and running open source

stacks on Azure.

It's better to share with functional programming (interview with Katie Miller, Red Hat)

Robin Muilwijk (originally published July 2014)

Katie Miller is a Developer Advocate at Red Hat for the open source

Platform as a Service, [OpenShift](https://www.openshift.com/), and

co-founder of the [Lambda Ladies](http://www.lambdaladies.com/) group

for women in functional programming. She has a passion for language and

linguistics, but also for the open source way:

I have a Red Hat sticker on my laptop that simply says: *It's better to

share.*

In this interview, [Katie](https://twitter.com/codemiller) shares with

me how she moved from journalism to a job in technology, how she got introduced to functional programming and the Haskell programming

language, and how open source is part of her daily life.

What path did you travel to go from journalist to software engineer?

Writing and technology are both longstanding themes in my life. After

school, I considered both journalism and information technology degrees.

My love of language and linguistics and an unfortunate lack of

confidence in my technical skills tipped the balance in favour of

journalism. I don't think I was aware of computational linguistics as an

option; it would have been the perfect blend. This is one reason why I

take part in activities to inform young people about opportunities in

the IT industry, such as the [Tech Girls are Superheroes

campaign](http://www.techgirlsaresuperheroes.org/) and IBM EXITE Camps

for teenage girls.

I worked in the news media for more than seven years and held a variety

of roles. Managing a news website helped me to realise how much I missed

tech, and I decided to return to university to study for a Master of IT

degree. I rediscovered my long lost passion for programming and launched

a new career as a software engineer. Last year I was given the chance to

combine my communications and engineering skills by taking on a

Developer Advocate role for Red Hat's open source Platform as a Service,

OpenShift, and given my background it seemed like a perfect fit.

Tell us a bit about the Haskell programming language and the concept of

functional programming.

My interest in Haskell and functional programming (FP) was sparked by a

very passionate professor during my Masters. The subject covering FP and

logic programming had been removed from the curriculum, but this

lecturer offered to teach keen students the content in covert sessions

in the campus library. After university, I joined the Brisbane

Functional Programming Group and found six other FP novices with whom to

work through the fabulous book [Learn You A

Haskell](http://learnyouahaskell.com/), from cover to cover.

Functional programming is a paradigm that treats computation as the

evaluation of pure, mathematical-style functions. One of the big

advantages of this approach is that it allows you to reason about your

code, in the same way you can reason about maths equations. Haskell is a

great language in which to learn FP concepts as it is purely functional

and statically typed. This means you can spend a lot of time arguing

with the compiler, but once your program compiles, it has a high

likelihood of being correct. I think this is a big improvement on

spending your time tracing values in a debugger and arguing with your

boss about how that bug made it to production.
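Her point about reasoning and the compiler is easy to see in a couple of lines. The sketch below is ours, not from the interview; it simply shows a pure Haskell function with an explicit type signature:

```haskell
-- A minimal illustration (not from the interview) of a pure, typed function.
-- 'total' has no side effects, so 'total [1, 2, 3]' can be reasoned about
-- like an equation: 1 + (2 + (3 + 0)) = 6. Passing a list of strings is a
-- type error the compiler rejects before the program ever runs.
total :: [Int] -> Int
total = foldr (+) 0

main :: IO ()
main = print (total [1, 2, 3])  -- prints 6
```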

![](https://opensource.com/sites/default/files/images/life/oscon-katiemiller.jpg)

(Katie Miller at the Tech Girls are Superheroes book launch. Photo by

David Ryan.)

Let's take a jump from Haskell and functional programming to the future

of technology. If I say the future lies with our children, would you

agree? Are CoderDojos and other programs like Europe Code Week enough?

Or should we do more to teach our children digital skills?

The ability to code gives you the power to choose what your computer

does for you, rather than relying on the interfaces others have created.

I think all children should be given access to this power through

programming classes. CoderDojo and many other volunteer-led programs are

doing fantastic work on this front, but what I would really like to see

is programming introduced to school curricula. I agree our future lies

with our children, and I think we should be equipping them to be the

inventors and masters of tomorrow's technology, rather than just its

users. Estonia and the United Kingdom have moved to introduce

programming education in schools, and I would like to see the same done

in Australia and other nations. Not every child will become a software

developer, just as not every child becomes a scientist or mathematician,

but I think programming should be part of the life skill set that is

taught.

How were you first exposed to open source, and what about open source do you like?

I was first exposed to open source in university, and that is also where

I was introduced to Linux. The open source way made sense to me and was

a major reason why, in a city roughly equally split between Java and

.NET, I chose to make my graduate job a Java role. My first IT employer,

a financial institution, was fairly progressive when it came to open

source and the role gave me the chance to use several FOSS projects. I

had just started running Fedora on my personal laptop and built my first

custom kernel when I decided to go for a job at Red Hat.

I love the way open source culture draws together people of all kinds

from around the world, to change the world. It gives everyone the chance

to share their ideas and make contributions that really can make a

significant impact. Working for Red Hat gives me the opportunity to

participate in that global collaboration as part of my everyday work,

which is an amazing privilege. When I write code, at work or play, the

default is always to open source, and thankfully I don't have to jump

through any hoops to do that. It still always gives me a buzz when

someone forks my code and submits a patch, or I have a pull request

accepted. I have a Red Hat sticker on my laptop that simply says, *It's

better to share*. That sums it up for me.

Can you give us the scoop on your OSCON 2014 talk?

I try to have a bit of fun with my conference presentations, and this

one is no exception. I will be attempting to explain a series of terms

commonly thrown around by FP fans in just three minutes each, which is

going to be a serious challenge. I've called the talk [Coder Decoder:

Functional Programmer Lingo Explained, with

Pictures](http://www.oscon.com/oscon2014/public/schedule/detail/35493),

but I'm not much of an artist, so if nothing else people can come along

to be entertained by the somewhat peculiar illustrations.

Why is Docker the new craze in virtualization and cloud computing? (interview with James Turnbull, Docker)

Jodi Biddle (originally published July 2014)

It's [OSCON](http://www.oscon.com/oscon2014) time again, and this year

the tech sector is abuzz with talk of cloud infrastructure. One of the

more interesting startups is Docker, an ultra-lightweight

containerization app that's brimming with potential.

I caught up with the VP of Services for Docker, James Turnbull, who'll

be running a Docker crash course at the con. Besides finding out what

Docker is anyway, we discussed the cloud, open source contributing, and

getting a real job.

You've written a few books on various Linux subjects. How did you first

discover Linux? What makes you so passionate about it?

I think I first stumbled over Linux in the mid-90s shortly after Debian

was released. I'd worked with OS400, VAX/VMS, and SunOS previously but

always in corporate environments. I don't think I immediately clued into

exactly how powerful this whole "open source" thing actually was. When I

discovered Linux, all of a sudden I had a desktop spec computer running

the same tools and services that powered the Internet. It was pretty

mind blowing. And importantly it was free. I didn't need to buy

expensive hardware and operating system software to do these cool

things. Then I realized that not only was it free but I got the

source code too. If something was broken or I wanted something more, I

could actually fix it (or at least take a stab at fixing it) or talk to

someone else who could fix it. That feeling of ownership combined with

the embryonic communities building around open source just amazed me.

I've loved open source ever since.

Your bio says "for a real job" you're the VP of Services for Docker. Do

you consider your other open source work a hobby?

That's mostly a joke related to my partner. Like a lot of geeks, I'm

often on my computer, tapping away at a problem or writing something. My

partner jokes that I have two jobs: my "real" job and my open source

job. Thankfully over the last few years, at places like Puppet Labs and

Docker, I've been able to combine my passion with my paycheck.

Open source contributors often speak about their work in that way; the

lines between hobby and profession are very blurred in open source. Do

you think that has a positive or negative impact?

I think it is both positive and negative across a lot of domains. It's

positive that solutions to problems we solve in our jobs (like building

tools, fixing bugs, writing documentation) can be shared with others and

hopefully make someone else's life easier or get them to the pub faster.

It's also negative in that being passionate about something so close to

my day job exacerbates that sense that you're "always on."

I'm also conscious of how those blurred lines are impacting the

diversity of our industry and open source communities. There is a

perception, certainly in the startup world, that a good developer is one

with a GitHub profile and who contributes to open source. I'm lucky

enough to have the time, money, and education to be able to contribute

to open source. But a lot of others don't have that privilege and that

is at least partially responsible for shaping the very narrow

demographics of many open source communities: white, male, educated.

That perception of a "good" developer has become somewhat of a closed

hiring loop and helps perpetuate the monoculture in open source and our

industry more broadly. I think that's something we desperately need to

change.

How did you become involved with the Docker project?

I came across Docker not long after Solomon open sourced it. I knew a

bit about LXC and containers (a past life includes working on Solaris

Zones and LPAR on IBM hardware too), and so I decided to try it out. I

was blown away by how easy it was to use. My prior interactions with

containers had left me with the feeling they were complex creatures that

needed a lot of tuning and nurturing. Docker just worked out of the box.

Once I saw that and then saw the CI/CD-centric workflow that Docker was

building on top, I was sold.

Docker is the new craze in virtualization and cloud computing. Why are

people so excited about it?

I think it's the lightweight nature of Docker combined with the

workflow. It's fast, easy to use and a developer-centric DevOps-ish

tool. Its mission is basically: make it easy to package and ship code.

Developers want tools that abstract away a lot of the details of that

process. They just want to see their code working. That leads to all

sorts of conflicts with SysAdmins when code is shipped around and turns

out not to work somewhere other than the developer's environment. Docker

works around that by making your code as portable as possible and making that portability user-friendly and simple.

What, in your opinion, is the most exciting potential use for Docker?

It's definitely the build pipeline. I mean, I see a lot of folks doing hyper-scaling with containers; indeed, you can get a lot of containers on

a host and they are blindingly fast. But that doesn't excite me as much

as people using it to automate their dev-test-build pipeline.

How is Docker different from standard virtualization?

Docker is operating system level virtualization. Unlike hypervisor

virtualization, where virtual machines run on physical hardware via an

intermediation layer ("the hypervisor"), containers instead run user

space on top of an operating system's kernel. That makes them very

lightweight and very fast.

Do you think cloud technology development has been heavily influenced by

open source development?

I think open source software is closely tied to cloud computing. Both in

terms of the software running in the cloud and the development models

that have enabled the cloud. Open source software is cheap, and it's usually low friction from both an efficiency and a licensing perspective.

How do you think Docker will change virtualization and cloud

environments? Do you think cloud technology has a set trajectory, or is

there still room for significant change?

I think there are a lot of workloads that Docker is ideal for, as I

mentioned earlier, both in the hyper-scale world of many containers and

in the dev-test-build use case. I fully expect a lot of companies and

vendors to embrace Docker as an alternative form of virtualization on

both bare metal and in the cloud.

As for cloud technology's trajectory, I think we've seen significant change in the last couple of years, and I think there'll be a bunch more before we're done. There's the question of OpenStack and whether it will succeed as an IaaS alternative or DIY cloud solution. I think we've only touched on the potential for PaaS, and there's a lot of room for growth and development in that space. It'll also be interesting to see how the capabilities of PaaS products develop and whether they grow to embrace

or connect with consumer cloud-based products.

Can you give us a quick rundown of what we should expect from your

Docker presentation at OSCON this year?

It's very much a crash course introduction to Docker. It's aimed at

developers and SysAdmins who want to get started with Docker in a very hands-on way. We'll teach the basics of how to use Docker and how to

integrate it into your daily workflow.

Girls' skills are needed in tech (interview with Jennifer Davidson, ChickTech)

Jen Wike (originally published July 2014)

ChickTech is based in Portland but plans to be nationwide by 2016. After

interviewing Jennifer Davidson about how ChickTech gets girls involved

in tech, I have high hopes it's even sooner.

The non-profit targets girls who would never nominate themselves to

participate in a tech workshop and who wouldn't dream of a career in

tech. Why? Because they've never had someone believe their skills were

valuable in that world. I believe that our society understands that

girls' skills are needed in tech; we've just needed support for our

girls like we've shown for our boys.

At ChickTech, women like Jennifer Davidson and

[ChickTech](http://chicktech.org/) founder Janice Levenhagen-Seeley give girls a chance. A jumping-off point. A view into a world that can

also be theirs.

ChickTech will host an [Open HeARTware

workshop](http://www.oscon.com/oscon2014/public/schedule/detail/34678)

at OSCON 2014 on July 20.

What drove the creation of ChickTech, and what does it do?

We started ChickTech because we've experienced, first-hand, the lack of

gender diversity in tech careers. Without this gender diversity, women

don't have a workplace that helps us feel like we "belong." So we

decided to create a nonprofit that would change that by creating a

community of support for women and girls, providing them with fun and exciting workshops to improve their confidence and abilities, and changing

tech culture for the better. Our general mission is to get more girls

and women in tech and to retain the women who are already there.

We're based in Portland, Oregon, but we're quickly expanding to cities

around the United States. Our current focus is ChickTech: High School, a

year-long program for 100 high school girls to participate in

project-focused tech workshops, internships at local tech companies, and

a mentorship program with local tech professionals.

Tell me about your role as program manager and how the leadership teams

work.

I help with every aspect of [ChickTech](http://chicktech.org/), from

helping Janice, the Executive Director, shape ChickTech's vision to

event planning to grant writing. However, a normal program manager would

head up a leadership team in a specific city.  So, in addition to

helping with everything else, I also head up the leadership team in

Corvallis as we plan a weekend-long event at Oregon State University.

It'll be the first event where the high school girls will get to stay in

the residence halls on campus. We're excited to provide a first-time

college experience for many participants.

Each ChickTech chapter (currently Portland, Corvallis, and San

Francisco) has a leadership team whose job it is to organize volunteers

to run events for the ChickTech: High School program. We implemented

leadership teams with a goal of building community amongst local tech

professionals and university students; these volunteers work together to

make positive change in their communities by introducing girls to

technology.

What are you trying to get across with the ChickTech: High School

program?

The main goals are to show girls that they *do* belong in tech, that they *can* do it, and that their skills and talents

are absolutely needed. ChickTech seeks to increase participants'

confidence in their tech skills and seeks to build a tech community for

participants to provide a sense of belonging and support. In the United

States, many girls are brought up to believe that "girls can't do math"

and that science and other "geeky" topics are for boys. We break down

that idea. We fill a university engineering department with 100 high

school girls—more girls than many engineering departments have ever

seen. The participants can look around the building and see that girls

from all backgrounds are just as excited about tech as they are.

We see such positive change in girls over just a 2-day event. We don't

want to change the girls to fit the current technology culture; we want to

ready them to improve that culture and their communities with our help.

The ChickTech: High School program starts with a 2-day kickoff event.

Each girl participates in 1 workshop for a full 2 days, ranging in topic

from robotics to user experience. Workshops are developed in

collaboration with ChickTech leadership, local tech professionals, and

university students. At the end of the workshop, girls have a customized

project that they can take home. ChickTech doesn't believe in boring

tutorials or panels. Instead, ChickTech volunteers work with the girls

to create a project of their very own. Something that the girls can be

proud of, that they can take home as a reminder of what they learned and

what they're capable of.

Well over 60% of ChickTech: High School attendees have never

participated in any tech-related event (programming, robotics, etc.)

before. Taking a girl who has no exposure to tech before the 2-day event to having a customized, operating robot by the end of that weekend

is quite powerful.

After the 2-day event, we have monthly workshops that revisit some

topics from the 2-day workshop, but also give them the chance to explore

how technology has revolutionized many industries, from medicine to

fashion to films. Our mentorship program is in its first year in

Portland, and it's having a great positive impact. We do team-building

activities between mentors and mentees, and we encourage mentors to

bring their mentees to user groups and tech events around Portland to

really help them feel like they have a welcoming tech community.

Do you use open source software and hardware in your high school and

adult workshops?

We use and promote open source tools and products because we want to

lower the cost barrier to entry for the girls. We want them to be able

to install and use these tools when they get home to continue their

projects. For example, our website design and creation workshop uses

Drupal, and we use open hardware (Arduino) for our soft circuits

workshop. In our computer construction course, we teach girls how to

install Ubuntu on a desktop machine that they built (and that they get

to take home!).

Personally, I am passionate about open source and open knowledge, and I

think it's a fabulous way to share knowledge and ensure that people from

all backgrounds have access to tech.

What is your experience working with teenage girls?

My first role in ChickTech was as a "Designing Experiences" workshop

lead. I worked with a team of volunteers to create the curriculum and

conduct a workshop related to User Experience. Conducting this workshop

was an example of how ChickTech not only positively influences the

girls who attend, but also the volunteers. My confidence went through

the roof because I learned that I knew enough about my field to teach

it, and teach it in a way that high schoolers would understand and find

interesting. I had a great time with the girls in the workshop, and it

was inspiring to hear things like, "I could totally see myself doing

this for a job!" and "People get paid for this? Cool!" The User

Experience workshop involves local non-profits as clients, and the girls

are grouped into teams and tasked with designing a solution to the

non-profits' tech issue. We have the clients pitch their ideas to the

girls, and the girls get to pick which non-profit they work with.

They work on real problems, with real clients. One of our clients,

Wearshare, found the experience so useful that they are hosting

ChickTech interns this summer.

The part of ChickTech: High School that is both unique and extremely

effective is the fact that the girls are nominated to attend. This sends them

the message that they were selected for this special opportunity because

someone believes in them. We ask high school career counselors and

teachers to nominate up to 15 girls from their school who they think

would excel in tech but who aren't engaged in tech opportunities yet. In

addition to reaching out to public schools, we also reach out to the

Boys & Girls Club, and alternative schools. Because we specifically ask

schools to find girls who have no tech experience, and who may not

consider tech without an extra push, we reach the girls who will not

self-nominate and have a very low chance of choosing a technology career

without us. We also ask that at least 33% of nominees are eligible for

free/reduced lunch. To make this program accessible to all, our events

are completely FREE for attendees, and we provide meals and

transportation for those who need it. It's *never* too late to become

a technology creator, and we pass that on to these amazing high school

girls who may very well find their passion is in tech.

Many girls in our program do not realize the many choices they have in

tech. The big problem is that, before entering ChickTech, they didn't

have a community that knew or cared about tech. We provide that. We've

seen girls write about ChickTech in their college essays and switch

their career path from dental assistant to engineer. ChickTech works,

and it's because of the passion of the founder, the passion of our

volunteers, and our excitement about creating a safe, welcoming, and

inspiring environment for these girls.

ChickTech coordinates workshops for adults too. Who leads these and what

skills are taught?

Currently, ChickTech: Career has done events in Portland. ChickTech

volunteers lead these workshops, and our volunteers are made up of tech

professionals from around Portland. Our signature ChickTech: Career

event is called "Advancing the Careers of Women in Tech," and has been

held for two years at Puppet Labs in collaboration with the Technology

Association of Oregon. It has attracted over 270 women and provides

one-on-one resume and interview advice with tech recruiters, along with

skill-building workshops like introduction to open source and

introduction to website development.

We've also run Arduino workshops for career-level women. In those

workshops, women come from all backgrounds to learn about open hardware

and how to program it. For many of these women, this is their first

experience with programming, and especially programming hardware!

Tell me about the workshop you'll be holding at OSCON 2014 this year.

How will it be similar or different than other events ChickTech has run

in the past?

[Open

HeARTware](http://www.oscon.com/oscon2014/public/schedule/detail/34678)

with ChickTech is similar to our "soft circuits" and "microcontrollers"

workshops that we have run with high school students. I participated in

one of these workshops with the high schoolers, because I had personally

never touched hardware before; I was a software person through and

through. You'll notice I said "was." After participating in our

workshop, I had the confidence to create with hardware and now I have so

many ideas! One of which is creating light-up Jupiter earrings like

Miss Frizzle in the Magic School Bus. Anyhow, we'll be enabling

attendees of this workshop to create a creature with felt, use

conductive thread and LEDs to learn about circuits, and learn how to

reprogram the LEDs and speakers. We capitalize "ART" in this, because

these projects are quite creative, and ChickTech seeks to show how

creativity is used in all forms of technology.

[This

workshop](http://www.oscon.com/oscon2014/public/schedule/detail/34678)

will be different in that we will encourage men to be participants as

well. We encourage both novices and experienced people to sign up. We

want to teach novices (like I was) that hardware isn't scary, and that

you can create fun things with it. We want to give experienced folks the

tools necessary to take this workshop back to their communities and

start doing outreach to get more women in tech.

What other great companies out there does ChickTech partner with to get

girls involved in coding and hardware?

We partner with a bunch of folks! This summer, we're working with Girls, Inc. to run two one-week-long workshops about smartphone app

development. We partnered with Girl Scouts to run workshops about

programming for younger girls. HP, Intel, Garmin, and Tektronix are just a few of our sponsors. We also have strong support from Portland State University and Oregon State University for our mission. As I briefly

mentioned earlier, we partner with organizations to get internships in

Portland for ChickTech attendees. Free Geek, Simple, HRAnswerLink, and

the City of Portland have provided internships for our girls and have

shown their commitment to diversity in tech.

Even with this support, we are completely volunteer-run. To make

ChickTech sustainable, we're looking for donations so we can have a few

paid employees. Our goal is to be nationwide by 2016.

Final thoughts?

ChickTech works, and things in tech need to change to include more

women.

3 ways to contribute to Firefox OS (interview with Benjamin Kerensa, Mozilla volunteer)

Bryan Behrenshausen (originally published July 2014)

Firefox OS, [Mozilla's open source mobile phone operating

system](http://www.mozilla.org/en-US/firefox/os/), needs developers. And

[Benjamin Kerensa](http://benjaminkerensa.com/) knows just where to find

them.

A Firefox OS evangelist and volunteer working as the platform's Early

Feedback Community Release Manager,

[Kerensa](http://twitter.com/bkerensa) will use his time on stage at

this year's OSCON to wage a recruitment effort. Along with [Alex

Lakatos](https://twitter.com/lakatos88), Kerensa will present [*Getting

Started Contributing to Firefox

OS*](http://www.oscon.com/oscon2014/public/schedule/detail/34588), an

introduction to building applications for the operating system.

Attendees will learn how Firefox OS embodies Mozilla's commitment to

open web standards like HTML, CSS, and JavaScript.

In Kerensa's opinion, those attendees are some of the sharpest around.

"For me, OSCON is a really special event because very literally it is

perhaps the one place you can find a majority of the most brilliant

minds in open source all at one event," Kerensa [wrote

recently](http://benjaminkerensa.com/2014/06/05/speaking-oscon-2014).

His talk arrives not a moment too soon, as

[Flame](https://developer.mozilla.org/en-US/Firefox_OS/Developer_phone_guide/Flame),

Mozilla's flagship phone running Firefox OS, [is about to

drop](https://hacks.mozilla.org/2014/05/flame-firefox-os-developer-phone/)—a

golden opportunity for open source app developers.

Kerensa spoke with us about what makes Firefox OS so special and what

it's like to teach mobile phone carriers the open source way.

Explain the philosophy behind Firefox OS. How does the project express

Mozilla's mission more generally?

I think that, simply put, the philosophy behind Firefox OS is really

pushing the web forward by creating an amazing new platform where HTML5

is a first-class citizen. This ties in really well with the mission Mozilla is trying to achieve overall, which comes back to pushing the

open web forward and really ensuring that it reaches as many people

around the globe as possible.

What differentiates Firefox OS from other mobile operating systems?

In my opinion, the biggest difference is that Firefox OS is a mission

driven platform, and Mozilla is interested in using it to bring real

change to users, while other platforms focus only on market share and

keeping both investors and partners happy.

I think other mobile operating systems are starting to pay attention to

what Mozilla is doing here with this idea of pushing forward an open

platform that anyone can hack on and that one of the largest developer

segments in the world can build apps for.

How does the work of developing an entire mobile OS differ from that of

developing a web browser?

Many of the development processes are the same since Mozilla operates as

an open source project, so we often use the same tools to get the job

done. I think the big difference is more at the level of delivering the

platform to users and working with carriers and hardware partners.

Many of these partners are used to the closed-door processes of other

operating system (OS) makers. So it's obvious there are some battles to

be won in terms of getting partners to operate more openly and sometimes

there are lines that hardware partners are not yet ready to cross. But I

think the fact that Mozilla is working with these companies and having

conversations about how Mozilla makes software is definitely having a

lasting impact on the industry.

I think, with each iteration, the process of developing Firefox OS is

becoming easier since we're learning more from each release. We continue

to refine not only the product but also the experience for our users in

terms of polishing documentation and improving the support experience.

We owe a lot of thanks to our worldwide community of contributors that

enable us to scale as a project, to support these users, and deliver a

great experience.

What challenges do you face as you pursue Firefox OS's wider adoption?

I think as a community and project, the biggest challenge Mozilla faces

is scaling to ensure we have enough contributors right there helping

drive these projects forward. While we have always prided ourselves on

having one of the largest open source communities, we are continuing to

seek growth so we can scale to take on these new projects like Firefox

OS.

One thing any project of our size has is growing pains, and I think we

definitely are facing some of those and learning how to scale our

resources, support, and mentorship for our contributors who are helping

us to grow our community.

Your OSCON presentation promises to teach developers how they can start

contributing to Firefox OS. What's the quickest way someone can start

productively aiding the project right now?

There are so many ways that potential new contributors can get involved

in contributing. I want to point out three ways to contribute.

The first two are [Bugs Ahoy](http://www.joshmatthews.net/bugsahoy/) and

[What Can I Do for Mozilla](http://www.whatcanidoformozilla.org/). These

are tools created by the community. Then, there is one excellent

external project called [OpenHatch](https://openhatch.org/), which also

offers ways to contribute to Firefox OS.

Another way to get involved is for those interested to connect with

someone from one of their [local

communities](http://www.mozilla.org/en-US/contact/communities/). Mozilla

has reps and other community contributors in many countries so this

creates a great opportunity for new contributors to receive mentorship

and help along their path of becoming a more seasoned contributor.

Finally, I will highlight the most important way folks can contribute to

Firefox OS and that is [creating an app for the Firefox

OS](https://developer.mozilla.org/en-US/Apps/Quickstart/Build/Your_first_app)

and [submitting it to our

marketplace](https://developer.mozilla.org/en-US/Apps/Quickstart/Build/Your_first_app#App_Submission_and_Distribution).

Understanding the metrics behind open source projects (interview with Jesus M. Gonzalez-Barahona, Bitergia)

Jason Baker (originally published July 2014)

What do the numbers behind an open source project tell us about where it

is headed? That's the subject of [Jesus M.

Gonzalez-Barahona's](http://www.oscon.com/oscon2014/public/schedule/speaker/173116)

OSCON 2014

[talk](http://www.oscon.com/oscon2014/public/schedule/detail/37016)

later this month, where he looks at four open source cloud computing

projects—[OpenStack](http://opensource.com/resources/what-is-openstack),

CloudStack, Eucalyptus, and OpenNebula—and turns those numbers into a

meaningful analysis.

And Gonzalez-Barahona knows analytics. As co-founder of

[Bitergia](http://bitergia.com/), he is an expert in the quantitative

aspects of open source software projects. Bitergia's goal is to analyze

software development metrics and to help projects and communities put

these numbers to use by managing and improving their processes.

In this interview, Gonzalez-Barahona spoke with me about his

[OSCON 2014](http://www.oscon.com/oscon2014) talk, leveraging metrics,

trends, visualization, and more.

Without giving too much away, what will you be discussing at your OSCON

talk?

I will be presenting an analysis of the development communities around

four cloud-infrastructure projects: OpenStack, CloudStack, Eucalyptus

and OpenNebula. I will try to show how, even though they are wildly different in

many aspects, they (or at least some of them) also show some similar

trends.

Tell us a little bit about Bitergia. What is exciting about the

exclusive focus on open source projects?

We believe that to make rational decisions involving free and open source software, you need to have the development communities in mind. And for that, you need the numbers and data that characterize them. We aim to provide those data and to help you understand them. Open source software projects are the most exciting

area of the whole software development landscape. Helping to understand

them, and giving people tools to improve their knowledge about projects

and communities is keeping us really happy, and is a continuous source

of fun.

What metrics do you think are most important for open source projects to

be aware of? Do they differ from project to project?

They may differ from project to project, but some metrics are useful.

For example, metrics that determine the age of developers by cohort (or generation) show almost immediately the attraction and retention

of developers that a project is experiencing over time. With just a

quick browse you can determine if experienced people are still around,

if you're getting new blood, or if you have a high burn-out. Company

participation is also interesting, from a diversity point of view. And

of course, there are those metrics related to neutrality: how companies

and independent developers interact with each other, if some of them get

favored when they interact, or not. Activity metrics have been used for

many years, and those are obviously interesting too. And now, we're working a lot on performance metrics: how well your bug fixing is working, or which bottlenecks you happen to have in your code review

process.
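
To make the cohort idea concrete, here is a minimal sketch of my own (not Bitergia's actual tooling), assuming you already have (author, date) pairs pulled from a repository's history. It assigns each developer to the quarter of their first commit and counts how many from each cohort are still active after a cutoff date:

```python
from collections import defaultdict
from datetime import date

# Hypothetical input: (author, commit_date) pairs extracted from `git log`.
commits = [
    ("alice", date(2012, 1, 10)),
    ("alice", date(2014, 5, 2)),
    ("bob",   date(2013, 7, 21)),
    ("carol", date(2014, 2, 14)),
]

def quarter(d):
    """Return a cohort label like '2013-Q3' for a date."""
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

first_seen = {}   # author -> date of first commit (defines the cohort)
last_seen = {}    # author -> date of most recent commit
for author, d in commits:
    first_seen[author] = min(d, first_seen.get(author, d))
    last_seen[author] = max(d, last_seen.get(author, d))

# Count, per cohort, how many developers joined and how many are still
# active after a chosen cutoff date.
cutoff = date(2014, 1, 1)
cohorts = defaultdict(lambda: {"joined": 0, "retained": 0})
for author, first in first_seen.items():
    c = cohorts[quarter(first)]
    c["joined"] += 1
    if last_seen[author] >= cutoff:
        c["retained"] += 1

for label in sorted(cohorts):
    c = cohorts[label]
    print(label, c["joined"], "joined,", c["retained"], "still active")
```

Run against a real history instead of the toy list, the same loop gives the quick "are experienced people still around, are we getting new blood" read described above.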

How might a project leverage metrics to inform decision making? What is

the best example you can think of showing how a project can improve from

what they have learned?

Just two examples:

After seeing their aging metrics, a certain company decided to invest in

a whole new policy to keep developers involved in their pet projects

because they realized they were losing too many of them for certain

cohorts, and they were really risking not having experienced developers

in one or two years.

With some open source foundations we have been working on very precisely

determining the participation and effort by developers affilated with

certain companies, because that was central to the negotiations between

these companies in deciding how to coordinate to support the project.

As you've worked with the metrics of various open source projects

through the years, what has stood out to you as surprising? Are there

any trends which seem to be emerging?

Something that you see once and again is how corporate support matters a

lot in some big projects. Granted, individual developers are important,

but a medium/large company can decide to double the number of developers

just by assigning experienced people to the project, thus boosting it

and generating a lot of momentum. But the process is not easy: you have

to carefully understand the dynamics, so that you don't cause burn-out

in volunteer developers or in others from other companies that are not

investing in the project at the same pace. Some may think that "helping

to accelerate" a project is just a matter of pouring money (or

developers) on it. On the contrary, we see it's almost an art: you have

to carefully track what's happening, and react very quickly to problems,

and maybe even slow down a bit as to not completely trash the effort.

But we've also seen how, when it works, it is really possible to come

from almost zero to hundreds or even thousands of developers in just a

few years, and still having a sustainable and healthy community.

How can visualizations help quickly provide a snapshot of data to a

project's community in a meaningful way?

There is so much data around that you need the right visualization to pick out the interesting information. The appropriate chart, or in some cases just the appropriate number, can give you much more insight than a huge pile of raw data. I guess this is usual in big-data problems anyway.

Consider that analyzing large open source software projects is a matter

of analyzing millions of records (commits, changes in tickets, posts,

etc.). Either you have the right visualization, or you're lost.
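
As a toy illustration of that reduction (my own sketch, not Bitergia's dashboards, with made-up column names), a table with one row per commit can be collapsed into a single small series that is easy to chart:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical input: one row per commit, with author and timestamp columns.
commits = pd.DataFrame({
    "author": ["alice", "bob", "alice", "carol"],
    "date": pd.to_datetime(["2014-01-03", "2014-01-20",
                            "2014-02-11", "2014-03-05"]),
})

# Collapse the raw records into one number per month: active authors.
monthly_authors = (
    commits.set_index("date")
           .resample("M")["author"]
           .nunique()
)

monthly_authors.plot(kind="bar", title="Active authors per month")
plt.tight_layout()
plt.show()
```

The same pattern scales from four rows to millions of records; the chart stays the same size, which is the point.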

Why is it important to have open source software tools for analyzing open source projects?

If you look around, several systems for analyzing and visualizing open

source software development are emerging. But unfortunately, most of

them are proprietary. And I say unfortunately because that's a pity for

the open source software community at large. It's not only that you may not have the resources to use those systems, or that you cannot use them as you would like because you don't control them. Even when they are free of charge, and you basically have what you need, they may still not be benefiting you as much as they could. You cannot innovate as you would like. You cannot adapt the system as your needs evolve. You cannot plug it into other data sources or other systems at will.

In short: you are not in control. This is not new; it is just the list of reasons why we all prefer open source software and find it more convenient and competitive. But it is of special concern to me that, in a field where we need to better understand how our projects work, the only option would be proprietary systems or services. Having all the software be open source, and all the data public and available (including intermediate data), is of paramount importance to the distributed control and improvement of open source software in the coming years.

Asciidoctor coder writes less documentation (interview with Sarah White, OpenDevise)

Nicole C. Engard (originally published July 2014)

I've been working as the documentation manager for the Koha project for

six and a half years, so when I saw that Sarah White would be talking

about documentation at OSCON this year I knew I wanted a chance to

interview her.

Sarah will be giving a talk entitled [Writing Documentation that

Satisfies Your

Users](http://www.oscon.com/oscon2014/public/schedule/detail/34952).

Sarah believes in helping users succeed at solving their problems by

working on and helping others write documentation for open source

software, and I have to agree with her that one of the best parts of

working on an open source project (not just writing the documentation)

is getting to meet awesome people!

Learn more about Sarah in my interview with her. And, make sure you stop

in to meet her at [OSCON 2014](http://www.oscon.com/oscon2014) if you're

there.

How did you get involved in open source?

I discovered open source software when I was in college and working with

ginormous CSV tables and satellite images. My classes and job required

collaboration between academic institutions and government agencies and

finding ways to integrate data from lots of sources. Open source

projects were absolutely integral to injecting the data into proprietary

programs and extending those programs' capabilities. For me, open source

has always provided customizable and flexible solutions. Since getting

involved in open source over 15 years ago, I've never encountered enough

compelling reasons to switch from a Linux OS or open source tools.

Why did you start OpenDevise?

I started OpenDevise with [Dan Allen](https://twitter.com/mojavelinux)

to help open source projects communicate clearly and effectively with

their user and development communities.

Have you written (or assisted in writing) documentation for any open

source projects?

I'm currently working on the documentation for the [Asciidoctor

project](http://asciidoctor.org/), and I've written educational content

for Arquillian and Fedora.

What have you learned working on open source documentation projects?

Documenting open source projects is time consuming. It's also fun and

invigorating. When you're creating documentation for a project you get

the pole position at the intersection between a project's maintainers

and users. You're helping users succeed at solving their problems while

collaborating with developers to improve the usability of the project.

And the best part of writing documentation is that I get to meet lots of

awesome people.

What are your favorite open source tools?

Fedora, it's clean, modern, fast, and just works no matter what external

devices I throw at it. [Arquillian](http://www.arquillian.org/), because

accurate testing is a must, and the community is one of the greatest open source communities. Ever. Git, because without version control I'd be locked in a little padded cell. Blender and darktable, they make my life

beautiful and are uber powerful. And last, but certainly not least, are

my two favorite writing tools, gedit and Asciidoctor.

How did the Asciidoctor project come about?

In 2012, developers at GitHub began a Ruby implementation of AsciiDoc

and open sourced the project. [Matthew

McCullough](https://twitter.com/matthewmccull) had recommended AsciiDoc

to Dan and me earlier that year when we were evaluating markup languages

that played well with GitHub, GitHub Pages, and Ruby-based static

website generators. Dan and I helped finish the first compliant version

of Asciidoctor (0.1.0) in early 2013, and [Ryan

Waldron](https://github.com/erebor) handed the project over to us. Since

then, the Asciidoctor community has rapidly advanced and improved the

software's capabilities. The [Asciidoctor

Project](https://github.com/asciidoctor) now encompasses more than 30

repositories—you can output PDFs and slide decks, view your content live,

in-browser, with the JavaScript implementation, and integrate with

Gradle and Maven. *Note: AsciiDoc.py was created by Stuart Rackham.*

When an open source project is lacking in documentation, where does one

start? How does one not only write the documentation, but get the

community to support and participate in their efforts?

When a project doesn't have documentation, I get together with the

project maintainers and find out what their vision and goals are for the

project. At the same time, I talk to the project's users and learn about

the project from their perspective. What are they using the project for?

Where and how are they using it? How does it help them? What problems

have they encountered while using (or trying to use) the project?

Additionally, I collect and analyze all the content I can find about the

project such as blog posts, presentations, screencasts, etc. I also dig

through the issue tracker, mailing list, discussion list, social media

mentions and any analytics I can get my hands on. Finally, I walk

through the code and try to use the project without any assistance.

What's the point of gathering all this information? To determine the

current and acute pains of the users. I use those pain points as a way

to focus the initial documentation, like README files and tutorials, so

it answers the most common needs and questions of the users.

Is there a method you find that works best for documenting software?

Groups working simultaneously, an individual documentation author, some

combination of the above?

I haven't found a silver bullet workflow for documenting software

because every project and its ecosystem is unique. But I do know how to

spark the process. You've got to get information about the software

flowing from the project maintainers and core contributors. They are an

untapped mine of information, but that information tends to get stuck in

their heads. A tactic I used for one successful project was to ask the

maintainers and developers to tell me one reason they loved their

software.

I discovered a plethora of essential benefits and features to document.

And a lot of that documentation now existed in the form of email

replies. I just had to structure it. This brings to mind one of my

favorite quotes on the subject:

"Most people are OK with writing e-mails. They don't consider this writing. There's no writer's block. Someone asks you a question, you reply and type away." —Stoyan Stefanov

Which leads me to another key part of the process: make the contribution

workflow as basic as possible. Throw out as many rules, requirements,

and tools as you can. Nobody likes rules anyway. And they're just one

more thing that has to be written, edited, and maintained. I don't know

about you, but I'd rather spend my time writing code snippets that

chronicle the adventures of wolpertingers, aliens, and lost defensive

operations manuals.

Justin Miller on how Mapbox runs like an open source project (interview with Justin Miller, Mapbox)

Michael Harrison (originally published July 2014)

Justin Miller is an open source vet. He's the lead developer on the

[Mapbox](https://www.mapbox.com/) mobile team, but he's been through the

ranks. He worked on Linux.com, put in his time as a sysadmin, and then

made his way into mobile development.

Now Justin is working at optimizing Mapbox's rendering technology and

helping developers use its framework to join map data with all sorts of

great geolocation purposes. He's talking at

[OSCON 2014](http://www.oscon.com/oscon2014) later this month about [how

the Mapbox organization runs like an open source

project](http://www.oscon.com/oscon2014/public/schedule/speaker/104652),

and in the interview below, he discusses how that approach has made

Mapbox the mapping tool of choice for big players like Foursquare and

Pinterest.

What's a typical day at the office?

I work for Mapbox remotely, but we do have offices. We've got about 30

folks in Washington, D.C., 15 in San Francisco, and about a dozen of us

are remote in the U.S. and Canada, Europe, and South America. We operate

in a distributed manner, even when in person, largely using

communication on [GitHub](https://github.com/), through many open source

repositories, and some internal issues-only projects for operations,

strategy, customer collaboration, outreach, and the like.

As far as a typical workday, I do a bit of customer support on our own

site and on [Stack Overflow](http://stackoverflow.com/) for our mobile

tools, spend some time strategizing with teammates, but mostly I'm found

coding on mobile tools in Objective-C, C, and C++. I also do a fair

amount of reading to see what else is going on in the company, whether

that's peeking in on discussions—everything is open to everyone

internally—or reading "devlogs" by my teammates, which are basically

blog-caliber posts about recent work to the internal team. Very

frequently, we will take those devlogs and massage them into a public

blog post because they are so interesting and enlightening. A great

example by my teammate [Amit

Kapadia](https://www.mapbox.com/about/team/#amit-kapadia) is here:

[*Debanding the

world*](https://www.mapbox.com/blog/debanding-the-world/). We typically

blog at least once a day with a lot of substance about new efforts,

technical deep-dives, partnership launches, or other topics. So most

days, I'm also either writing or helping copyedit a blog post!

Any new developments at Mapbox that you're excited about?

The most exciting thing to me right now is [Mapbox

GL](https://www.mapbox.com/blog/mapbox-gl), which we launched in early

June on iOS, Mac, and Linux, and which is coming soon to the web and

Android. It's a complete reimagining of our map rendering technology

that moves from pre-generated raster imagery of maps to a lighter-weight

vector format that is rendered on the client with OpenGL. We've always

focused on custom design, interaction control, and offline capabilities

with our mapping technology, but with Mapbox GL we are moving all of

that towards a goalpost of design at 60 frames per second. We are hoping

to unlock new possibilities, especially on mobile, with developers being

able to join mobile sensor inputs like heart rate monitors, pedometers,

and altimeters with custom map rendering to make the map a living,

breathing canvas for things like fitness apps and other geolocation

uses.

Your talk is about Mapbox and

[OpenStreetMap](http://www.openstreetmap.org/) and how companies are

switching to it over closed mapping systems. Can you give a few examples

of projects using the OpenStreetMap library?

Sure. Some of the projects that are now using—and contributing—to

OpenStreetMap by way of Mapbox include

[Foursquare](https://foursquare.com/) for the maps that you check in on,

GitHub for the maps that help you visualize your projects' geo data

files, and [Pinterest for the maps that you pin places

to](http://blog.pinterest.com/post/67622502341/introducing-place-pins-for-the-explorer-in-all-of-us).

On the smaller scale, we've had apps that are all-in-one resources to

the [Tour de France](http://www.letour.com/), geolocation check-in games

where your local convenience store is full of zombies during the

apocalypse, and offline hiking maps for outdoor enthusiasts. We have a

[great showcase of beautiful uses of

OpenStreetMap](https://www.mapbox.com/showcase/) along with links to

their project pages for more info or to try them out.

Mapbox is "running a business like you would run an open source

project." Can you elaborate on what that means?

This is the meat of my talk, but basically, the organization is flat and

open. People join in on projects based on interest and available time,

or start their own projects based on an idea and the ability to convince

a couple coworkers that it's a worthwhile effort. If you have an idea

for improvement, talk is cheap and putting in the code to demonstrate

its potential is preferred. It's a very exciting way to choose direction

and participation and lets everyone engage based on their interests and

skill set. And nearly everything we write, anything that's easily

reusable by someone else, is completely open source.

What brought you to Mapbox and open mapping? What are some of your past

jobs?

I've been in open source for a long time. In the late 90s, I worked for

VA Linux Systems on the original Linux.com portal as the links manager,

helping people find Linux and open source resources. I also cofounded a

few startups where I was in charge of the web infrastructure, so I spent

a long while as a sysadmin on Linux, Apache, MySQL, and various mail

servers doing datacenter installs, including with Voxel.net, which was

an early SourceForge and PHP mirror. I moved from there into sysadmin

for non-profits and political campaigns, first as a consultant at

another small startup and later on my own, where I also did a lot of

freelance work on iOS and OS X as well as PHP development. When the

iPhone and smartphones hit in 2008, I was already half a decade into

open source Cocoa development, and that led me back to some contacts in

the non-profit and NGO space who were moving towards open data-based

maps and what eventually became Mapbox in 2010. I've been working

full-time on Mapbox since early 2011, and I've seen the company grow from

about a dozen folks to over fifty today.

What's your preferred repository tool and why?

[Git](http://git-scm.com/) and [GitHub](https://github.com/). The

integration between code, discussion, inline commentary, and

notifications is superb. Mapbox runs on over 250 repositories for

everything from code to office equipment buys to travel planning to

business strategy.

How can our readers reach out to you or to Mapbox if they're interested

in learning more?

[We blog nearly every day](http://mapbox.com/blog), or you can reach

Mapbox on Twitter at [@Mapbox](https://twitter.com/Mapbox) and me at

[@incanus77](https://twitter.com/incanus77).

Is making your product free and open source crazy talk? (interview with Patrick McFadin, Datastax)

Scott Nesbitt (originally published July 2014)

Making money from open source. To many in the corporate world, that

seems like a contradiction in terms. *How are you supposed to make money

from something that you give away?* they ask. It can be done. A number

of companies, large and small, have done quite well in the open source

space over the years.

Just ask Patrick McFadin. He's the chief evangelist for [Apache

Cassandra](http://cassandra.apache.org/) at

[DataStax](http://www.datastax.com/), a company that's embraced the open

source way. He's also interviewed leaders at a number of successful open

source companies to gain insights into what makes a successful open

source business.

At this year's [Open Source

Convention](http://www.oscon.com/oscon2014/), held July 20–24 in

Portland, Oregon, McFadin will

[discuss](http://www.oscon.com/oscon2014/public/schedule/detail/34356)

how to make money with open source software (OSS) without deviating from

the core principles of open source. In McFadin's words:

I'm not bringing a magic formula, but I can offer advice to people

getting started. There are plenty of pitfalls going from an OSS project

to OSS company, and I've seen quite a few in my career.

I caught up with Patrick McFadin before his talk. Here's what he had to

tell me.

Why would a company or an entrepreneur want to go the open source route?

If you are looking to be competitive with software, closed source has

become an impediment. I say that especially about infrastructure software, such as servers, tools, and libraries. The genie is out

of the bottle and running open source has gone from being the exception

to more of the rule. That an engineer can look at the source code, see

the product features being worked on, and file new issues or features

imparts a level of trust you can't get in other ways.

If your software is hiding behind marketing glossies and your competition isn't, that's going to be a tough sell. I feel somewhat validated as I see many companies that were previously closed source only now trying to open source software as a way to stay competitive.

What advantages does being an open source company offer?

The big advantage of open source over closed is adoption. When you

release your masterpiece to the world, you have no credibility or trust

built up. A user has to feel that downloading your software is a good

idea and carries a manageable risk. If you consider the [*free as in

beer*](http://en.wiktionary.org/wiki/free_as_in_beer) aspects of open

source, having fully functional software, with no limitations, free for

the downloading, eliminates a lot of perceived risk. If it doesn't work,

you have only lost some time understanding why.

I think of the way we have done business in a closed source world, which wasn't that long ago. Organizations have built up massive processes to evaluate products just to make sure there is a fit for the needed use case. I blame the rise of the Request For Proposal (RFP) on closed source software: endless lists of "will your software do this or that." If you are considering an OSS project, it will take less time to download and try it than to generate an RFP.

What, in your experience, are the most effective business models for an

open source business? Why?

The easy trap when starting a business plan around open source is to

become a services company. Consulting, training, and support are all

very important. But they're difficult to scale and, as a company, they lead to a low valuation.

If you are into software, you need to be a product company. This means

selling a license at a unit cost. The license can include support and

other services, but it can't be a service contract. This also means you

need to throw in some closed source software as part of the license.

It's a fine line of how you mix in closed and open and can be a hot

button issue for people supporting pure OSS. My criterion is to never close source anything that may degrade the open source version. Open it up,

and just be OK that some people will never buy your software.

Where do you find that the main opportunities in open source for

businesses lie?

Product companies can have a much better outlook than service companies

just from a risk perspective. When the company's success is driven by

the knowledge of a few people, the loss of those people can be

devastating. I've seen entire teams in Silicon Valley move from one

company to another. If you are selling a product, it will still hurt but

you can hire more people and the product will live on. If the value is

in the mindshare, you may not recover.

That's not to say that product companies are risk free. You are still

faced with challenges in adoption and sales. Convincing organizations to

pay for something that is free to download means you have to be good at

showing value. Companies everywhere are adopting OSS but are not ready

to be deep experts in whatever software they are using. That's where the

value proposition lies and hitting that is critical to staying in

business.

What, if any, are the barriers to entry?

The first barrier is just understanding your market impact. Is there

enough demand to support a sustainable business?

The second is convincing others that there is a market. It's not easy telling people that part of your business plan is to make your main

product free and open. In the traditional business sense, that's just

crazy talk.

Fortunately, those barriers are lowering as OSS becomes more mainstream

and can be backed with real data. I was recently at an OSS business

conference where there were plenty of executives and investors in

attendance. Finding those like-minded individuals will help you be

successful. Creating a business around a project can help nurture it and help it grow. A great line I heard once is:

"I see an OSS project as competitive when somebody quits their day job and tries to make a business around it."

What effect does picking a license have on a company's ability to make

money with open source?

License choice is a huge challenge for a project maintainer. What seems

like a quick or simple choice can turn into a long term liability. Like

any commercial license, the devil is in the fine print. You can go with

a very permissive license such as the [Apache

License](http://www.apache.org/licenses/LICENSE-2.0.html), or something

more restrictive such as the [GNU General Public License](http://www.gnu.org/copyleft/gpl.html) (GPL), or one of its variants like the [AGPL](http://www.gnu.org/licenses/agpl-3.0.html).

If you plan on doing business with other businesses, you may want to

reconsider the use of GPL. Use of the GPL can be seen as a liability in

some corporations and can limit adoption. On the other hand, if you pick a very permissive license like Apache, you lose a lot of control of your

project to anyone that wants to fork it and take it in a different

direction.

I prefer the more permissive licensing as I believe it offers the most

flexibility for adoption and therefore fewer commercial issues.

Can smaller shops (say, of one or two people) apply the advice you'll be

offering in your talk to their work?

DataStax, the company I work for, started with two guys looking to

support Apache Cassandra. That business has grown and done well.

There is a path from very small to large public company. It's way more

than just making a tarball of the source available from a website. Small

shops have a huge advantage in that regard when starting as OSS first. I

will have something for those companies looking to open source something

previously closed source as well. I can say that is probably the most

difficult since there is so much commercial inertia.

Open source to make caring for your health feel wonderful (interview with Juhan Sonin, Involution Studios)

Jen Wike (originally published July 2014)

Juhan Sonin wants to influence the world from protein, to policy, to

pixel. And, he believes the only way to do that is with open source

principles guiding the way.

Juhan is the Creative Director at [Involution

Studios](http://www.goinvo.com/), a design firm educating and empowering

people to feel wonderful by creating, developing, and licensing their

work for the public.

"We believe that any taxpayer funded effort should be made available, in

its entirety, to be reused, modified, and updated by any citizen or

business, hence the open source license. It should be a U.S. standard

practice for contracted work."

One of their works is [hGraph](http://hgraph.org/), a visual

representation of your health status, designed to help you alter

individual factors to improve it.

"Two companies are using hGraph in their clinical software. One offers

corporate clinic-as-a-services (less than 1,000-person corporate

campuses have clinics to serve employees), and an international pharmacy

is launching hGraph to drive their next generation clinician and patient

experience."

You describe the coming wave of healthcare to be based on sensors that

are tailored to our genome, non-invasive, and visible. This is where

design comes in. How do you design applications that "feel wonderful"?

I always picture the

[orgasmatron](http://en.wikipedia.org/wiki/Orgasmatron) from Woody

Allen's movie, Sleeper (1973), which was set in the year 2173. This

fictional device, a cylinder large enough to contain one or two people,

rapidly induced orgasms. A character would walk in, have a blast, and

walk out only seconds later. That's how I want healthcare delivered. It

needs to be fun. I want to think more about life and a lot less about

"health" and "security."

Think about the current line-up of consumer health devices. They require

massive mental and physical overhead to engage with, from remembering to

wear it to switching it to night-time mode to taking it off during

showers. They all suffer from the same issue: they're a branch of

devices called the "non-forgettables." By contrast, in-the-background,

automatic health sensing lets us focus our consciousness on the dream

life we want to live. The act of not thinking about my health and just

enjoying life is one major way to make a service feel wonderful.

And that's really what our work is all about: trying to create things

that feel truly wonderful.

How do open source principles guide your design process? How do they help you lead as Creative Director of Involution Studios?

Radical transparency is one of our studio's core beliefs.

For us, radical transparency means:

- open financials, where every staff member has access to the corporate financial data

- open decision making, involving everyone as equals and giving everyone a say in the studio's future via everyday design, business, and technical decisions

- personal responsibility and integrity based on a public code of

ethics that drives our behavior... through speaking the truth to

ourselves and clients, learning together, and ultimately producing

real, helpful, and beautiful solutions for Spaceship Earth

Our open design mission is that our designs (patterns, code, scripts,

graphics, ideas, documents) will be available to any designer, to any

engineer, to any world citizen, to use and modify without restriction.

Our [code of ethics is

public](http://www.goinvo.com/about/code-of-ethics/) as well. We're not

perfect in every open source tenet, but we're making progress.

What are the Health Axioms?

I had a cholesterol check a day after eating a lot of salami—this is not

a joke, by the way—and it was like 350! Now thankfully that was not my

real cholesterol but it got me interested in my health. So much of this

is digital and tracking-related, but so much of it is old-fashioned,

analog stuff. I think that is where a bigger impact can be made and was

how I wanted to craft a solution. [The Health

Axioms](http://healthaxioms.com/?view=grid) are about daily care and

attention to health in a distinctly physical format.

It's about sitting down at the dinner table on Sunday nights, hopefully

several nights a week, eating green peas, kale chips, and maybe fish or

tofu (instead of beef). It's how our mothers or grandmothers used to

remind us about saying please and thank you, going outside to play,

eating your veggies, and grabbing fresh herbs from the garden. But it's

also about reminding people about modern realities: put on your damn

rubber! Don't be stupid out there.

That's where the Health Axioms fit in. It's [a deck of

cards](https://www.flickr.com/photos/juhansonin/8674571551/) that helps

people cut through the clutter and focus on clear, actionable advice

that will impact your health and how we interact within the healthcare

system. [Each

card](http://www.amazon.com/Health-Axioms-Card-Juhan-Sonin/dp/B00IBJQPX8)

has a single idea, one specific behavior we should concentrate on. The

Health Axioms are graphic examples, using a visual story to communicate

something important about your well-being.

Anyone can buy and download the Health Axioms, or share, remix, and reuse them.

How are people consuming Health Axioms most often?

Doctors are using them to discuss key issues with patients and handing

them out as daily reminders post-checkup. Three different clinical

practices are currently leveraging them as part of everyday exams. Wall

art is another outlet, where clinics are hanging large Health Axioms on

the walls to inform discussions and remind patients and doctors alike of

the 4-6 healthy behaviors we should engage in, such as:

- Get More Sleep

- Exercise is Medicine

- Food is Medicine

- Stop Smoking

- Examine Yourself

Next up is environmental graphics—imagine bigger-than-life-sized murals

on hospital walls. We're working on some great prototypes with a Boston

clinic right now.

Do you think they have reached more people because they are licensed as

Creative Commons? How has this affected your profits?

We make our ideas free. The Health Axioms are openly published on

Flickr, GitHub, various forums, and our

[website](http://www.goinvo.com/). The CCv3 Attribution license has

allowed our previous designs, photos, and documents to be viewed by

millions and used on thousands of sites, which in turn has driven more

business to the studio. We're just five months into the Health Axioms'

life. It usually takes a year of getting the word out and community

nurturing to begin to see results. However, we're already seeing the

engaged patient community lock onto them. Stanford Medicine X is

highlighting the content. National Public Radio [ran a story about

them](http://www.npr.org/blogs/health/2014/03/28/295734262/if-a-pictures-worth-1-000-words-could-it-help-you-floss).

It is wonderful to see but we have a long way to go.

Tell us about licensing the book, *Inspired EHRs: Designing for

Clinicians*, under Apache 2.0?

It is a [graphic whitepaper](http://inspiredehrs.org/) on designing user

interfaces for electronic health records (EHRs), written for anyone who

develops and implements health IT software, with a focus on those EHR

vendor teams that want to dive more into human factors and design. By

prototyping UI ideas, the interactive publication allows engineers and

designers to snarf the code via GitHub, inject their own clinical data,

and validate the ideas with their own teams and stakeholders, including

clinicians and patients.

Completed as of July 1, 2014, [the book](http://inspiredehrs.org/) is a

global resource paid for by the non-profit California Healthcare

Foundation, as well as the taxpayer funded Office of the National

Coordinator for Health IT (ONC). Our position is that any taxpayer

funded effort should be made available, in its entirety, to be reused,

modified, and updated by any citizen or business, hence the open source

license. It should be a U.S. standard practice for contracted work.

What is the big mission at Involution Studios? How did you choose open

source as a method for creating, developing, and licensing your work?

As designers, we want our fingers, hands, and eyes on all the moving

parts of a project, no matter how small or big. We want to influence the

world from protein, to policy, to pixels. That means expanding our

skills and knowledge to have impact at levels of science and society as

well as design and engineering. That degree of immersion into problem

solving and the holistic context of our clients enables us to make an extraordinary impact, at a level that transcends the immediate issues of our clients and gets into issues of meaning and the longer-term future.

And at some point in our careers, designers and engineers need to be

involved in policy… in the crafting, designing, or development of guidelines or laws that drive how we, as a people, operate together (or not). Some efforts are grassroots, like the Inspired EHR Guide, which starts with just a few people. This is attacking at the fringe, from the outside in. The data standards and policy-making and advising mafia (like HL7 or HIMSS) need good engineers and designers to participate. This is not super-sexy work. While the pace of sculpting governance is enormously slow (these kinds of efforts take years and are often

frustrating experiences), the ultimate outcome and impact can be

long-lasting. And making this kind of change is why we're in business.

CloudBees programmer to give talk on how to develop a massively scalable HTTP server (interview with Garrett Smith, CloudBees)

Robin Muilwijk (originally published July 2014)

Garrett Smith from [CloudBees](http://www.cloudbees.com/) has been in

software engineering for over twenty years. He got introduced to the

Erlang programming language while evaluating CouchDB. Along with Visual

Basic, Visual C++, and Erlang, Garrett also learned about Python.

It was Python that introduced him to the world of open source, and from

there, as he learned more about open source and the dynamics of

community-based development, he never looked back.

Garrett will be talking at OSCON 2014 about [Erlang and building a

massively scalable web

server](http://www.oscon.com/oscon2014/public/schedule/detail/34881).

You don't need in-depth knowledge of Erlang to get something out of his talk about what he says are "some pretty amazing things about building software in general."

Please tell us a bit about your background and career. When did you

first cross paths with open source?

I've been writing software professionally for over twenty years and got

my start in the Microsoft Windows 3.x era of programming, so Visual

Basic and Visual C++ and even some VBA with Excel and Access. It sounds

pedestrian now, but it was very exciting at the time. Businesses were

spending money to build lots of software that ran on cheap PCs.

Programmers were in short supply and it was a wide open arena to learn

practical programming skills—and get money to buy beer.

When Java made its debut in 1994, I made a shift there. Apart from

its affinity with the Web, Java's cross platform support offered a path

out of the Microsoft hegemony. I bought the GoF "Design Patterns" and

made it my goal to use every pattern at least once in a project I was

leading. It was like a cocaine bender—an initial feeling of euphoria

followed by a crash that left me alone, bleeding in the gutter, without friends, and with the sense that I had let the Universe down, morally.

My experience with Java was an exercise in problem creation, not problem

solving.

After seeing what I had done—and indeed what the Java culture

encourages—I tried my hand at Python. Python embraces multiple

platforms in a pragmatic and direct way and loses static typing. The

whitespace thing didn't bother me, and I enjoyed not having to think in

multiple layers of abstraction. I also met amazing people in the Python

community. This introduced me to "open source" (a loaded term then, less

so today) and the dynamic of community-based software development. I

never looked back.

I had been writing proprietary commercial software for about ten years

at that point and sensed that industry was decaying and would eventually

die. It might take a long time, but the future was not in selling

software licenses, it was somewhere else. The problem, as I saw it, was

that commercial software vendors were compelled to ship features to

drive sales. The features they built, in many cases, were drummed up,

internally, by product managers that needed something to announce and

market. The features often had nothing to do with real user pain and

ended up making the software worse. This cycle would continue, on and

on, until the software ended up bloated, awful to use, and expensive to

support.

When you're not charging licensing fees you can focus on what is needed

by users. Adoption is the key to success. Rather than shipping quarterly

releases with speculative features that you hope justify your customers'

subscription plans, you can ship routinely with small improvements that

are vetted with real-world use. It's a radically different program, and so clearly superior that I don't see commercial software surviving it.

You'll continue to see pockets of niche, vertical applications that

remain successfully closed, but in time I think these will succumb to

open processes.

In an attempt to find a business model that could fit harmoniously into

the open source revolution, I shifted my career to enterprise software

services, but quickly returned to software product development. I'm

currently working in the "cloud" space, which is, in my estimation, a

way to add value to data center hosting services (fundamentally selling

electrical power, cooling, and backbone access). Whatever you call it

(IaaS, PaaS, SaaS, and so on) building products and the services that

run "out there somewhere" is a long running business model that

programmers can safely commit to.

Erlang goes a long way back to 1986 and the Ericsson Computer Science

Laboratory. When did you first learn about it? And what made you start

using it?

I first learned about Erlang when evaluating CouchDB for a project. I

had to install this language called "Erlang," and I had never heard of

it. Weird, how often does that happen? So I started playing around with

it and bought Joe Armstrong's "Programming Erlang" and was intrigued. I

had often discussed with friends the shortcomings of languages that bolted distributed programming concepts on as an afterthought. Here was a language that had distribution built into the core!

I kicked the tires, built the samples, and left it alone for a couple

years. I then ran into a problem that I just could not solve using the

in-house tools at hand—Python and Java. It was a fine-grained monitoring problem that involved tens of thousands of independent monitoring states. The POSIX-thread-based models were eventually falling over with mysterious hangs and process crashes. As a monitoring program, this had

to run reliably and had to deal with this hard concurrency problem. I

knew we had a long haul in getting things right with Python or Java, and

that opened up the option of using a new language like Erlang.

I took the hit to rewrite the program in Erlang. It took less time than

I expected, and it worked extremely well. That experience convinced me

of the merits of using Erlang—both as a superior model for handling

concurrency (independent processes with message passing)—and also,

surprisingly, as a functional language.
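
Erlang processes have no direct Python equivalent, but as a rough analogy (my own sketch, with invented host names and a fake probe, not Garrett's code), the shape of that monitoring problem, many independent pieces of state coordinated by message passing rather than shared-memory threads, looks something like this with asyncio tasks and a queue:

```python
import asyncio
import random

# Each "monitor" owns its own private state and only communicates by
# sending messages to a collector queue, loosely mimicking Erlang's
# independent processes with message passing. Erlang comfortably runs
# tens of thousands of real processes like this.
async def monitor(name: str, outbox: asyncio.Queue, checks: int = 3):
    state = "unknown"  # per-monitor state, never shared directly
    for _ in range(checks):
        await asyncio.sleep(random.uniform(0.01, 0.05))  # fake probe
        state = random.choice(["ok", "degraded", "down"])
        await outbox.put((name, state))
    await outbox.put((name, "done"))

async def collector(inbox: asyncio.Queue, expected: int):
    finished = 0
    while finished < expected:
        name, state = await inbox.get()
        if state == "done":
            finished += 1
        else:
            print(f"{name}: {state}")

async def main(n_monitors: int = 100):
    inbox: asyncio.Queue = asyncio.Queue()
    monitors = [monitor(f"host-{i}", inbox) for i in range(n_monitors)]
    await asyncio.gather(collector(inbox, n_monitors), *monitors)

if __name__ == "__main__":
    asyncio.run(main())
```

The analogy is loose: asyncio tasks share one interpreter and one failure domain, whereas Erlang processes are isolated and supervised, which is exactly the property the interview is pointing at.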

In using a functional language, I discovered that my programs were more

coherent (i.e. easier to understand and think about) and more stable

(fewer bugs). It prompted my gradual replacement of Python and Java for

backend, long running programs. They were easier to write and easier to

maintain.

This was a formative experience. Erlang is hands down my tool of choice

for writing long running, unattended programs. I'm also now a strong

proponent of functional languages in general.

What does a day at CloudBees look like for you?

We always look at open source software first at CloudBees. CloudBees is

the company behind Jenkins—a hugely successful open source project. We

realize that the quality of software reflects the quality of the

community behind it. Notice I said "community" and not "development

team." In open source, there are certainly core contributors, but open

source projects are living organisms that are continually shifting and

evolving. A great open source project, like Jenkins, attracts brilliant

contributors. Those contributors make the software better. Without

community a technology will die. We understand that and look for tools

and technology that are attracting and nurturing talented humans.

Internally, we use open source patterns of development. Individuals are

encouraged to contribute to internal projects, submit pull requests, and

in general solve problems directly by writing code. CloudBees loves

code—we are skeptical of designs and plans. Designs and plans are

great starters, but the programmers at CloudBees respect working code

more than anything. We're all very much sold on the open source way of

building software. There's no question that it results in higher

quality, better fitting technology. It goes faster as well.

Your session at OSCON will have a clear goal, to teach the audience to

develop a massively scalable HTTP server and learn all about Erlang.

What type of developers should attend?

People are interested in Erlang and the tutorial is a great way to see

how working software is built in the language. The tutorial slot is of

course too short to walk away with deep knowledge, but seeing how stuff

is built in a live context where you can ask questions and interact with

the teacher is invaluable. If you're curious about Erlang, come to this

session—it doesn't matter what your specialty is. The focus will be on

the Tao of Erlang—how it works and why it's important.

I absolutely love to teach. Really, I think everyone does—it's about

sharing the things you're passionate about and who doesn't do this

naturally? I try hard to keep the big picture front and center. For example, knowing what pattern matching is in Erlang is one thing. But knowing how

pattern matching changes the way you program and improves software

quality—that is powerful. If I can get some brain bits to flip—get some

"ahhh, that's really cool" moments with folks, I'll have my endorphin

release for the day.
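
Erlang-style pattern matching has no exact counterpart in most languages, but as a rough, hedged analogy (my sketch, with invented message formats, not anything from the talk), Python 3.10's structural pattern matching gives a taste of dispatching on the shape of a message instead of writing chains of conditionals:

```python
# Dispatch on the shape of a message with structural pattern matching.
# The message formats here are made up for illustration.
def handle(message):
    match message:
        case {"type": "ping", "from": sender}:
            return f"pong -> {sender}"
        case {"type": "metric", "name": name, "value": float(value)}:
            return f"recorded {name}={value}"
        case {"type": "shutdown"}:
            return "stopping"
        case _:
            return "unknown message"

if __name__ == "__main__":
    print(handle({"type": "ping", "from": "node1"}))
    print(handle({"type": "metric", "name": "cpu", "value": 0.42}))
    print(handle({"type": "shutdown"}))
```

The point is the one Garrett makes: once dispatch follows the data's shape, the structure of the program changes, not just its syntax.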

Any final thoughts you would like to share? A sneak preview into your

talk?

Anyone attending either the session talk or the tutorial should be prepared to learn some pretty amazing things about building software in general. Erlang is a unique language, and the topics that

differentiate it are important to programmers. Process isolation, system

orientation, fault detection and recovery, distribution—these are

central to Erlang's identity. That's not true of any other programming

language that I'm aware of. Ironically, everyone's talking about these

topics *outside* the language tier—operating systems, VMs, containers,

cloud, etc. Erlang had this 20 years ago inside the language. Even if

you don't pick up Erlang, knowing how it approaches the programming

model will be useful in thinking about programs in other languages.

Of course, I hope people pick up Erlang as well.

DefCore brings a definition to OpenStack (interview with Rob Hirschfeld, Openstack)

Jason Baker (originally published July 2014)

What's in a name? Quite a bit, actually. To ensure compatibility between

products sharing the same name, it's important that users can expect a

core set of features to be consistent across different distributions.

This is especially true with large projects like OpenStack which are

made up of many interlocking components.

Rob Hirschfeld is working to solve this problem. He serves as co-chair

of the OpenStack DefCore committee, which is leading the effort to

create a firm definition to OpenStack by defining the capabilities,

code, and must-pass tests for all OpenStack products. Hirschfeld also

holds one of the elected community positions on the OpenStack Foundation

Board of Directors and is Distinguished Cloud Architect at Dell, where

he works with large-scale integrated cloud systems.

I asked Hirschfeld to share a little bit about OpenStack, the DefCore

effort, and his upcoming talk at OSCON in this interview.

Without giving away too much, what are you discussing at OSCON? What

drove the need for DefCore?

I'm going to walk through the impact of the OpenStack DefCore process in

real terms for users and operators. I'll talk about how the process

works and how we hope it will make OpenStack users' lives better. Our

goal is to take steps towards interoperability between clouds.

DefCore grew out of a need to answer hard and high stakes questions

around OpenStack. Questions like "is Swift required?" and "which parts

of OpenStack do I have to ship?" have very serious implications for the

OpenStack ecosystem.

It was impossible to reach consensus about these questions in regular

board meetings so DefCore stepped back to base principles. We've been

building up a process that helps us make decisions in a transparent way.

That's very important in an open source community because contributors

and users want ground rules for engagement.

It seems like there has been a lot of discussion over the OpenStack

listservs over what DefCore is and what it isn't. What's your

definition?

First, DefCore applies only to commercial uses of the OpenStack name.

There are different rules for the integrated code base and community

activity. That's the place of most confusion.

Basically, DefCore establishes the required minimum feature set for

OpenStack products.

The longer version includes that it's a board managed process that's

designed to be very transparent and objective. The long term objective

is to ensure that OpenStack clouds are interoperable in a measurable way

and that we also encourage our vendor ecosystem to keep participating in

upstream development and creation of tests.

A final important component of DefCore is that we are defending the

OpenStack brand. While we want a vibrant ecosystem of vendors, we must

first have a community that knows what OpenStack is and trusts that

companies using our brand comply with a meaningful baseline.
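
To make "required minimum feature set" concrete, here is a tiny illustrative sketch of my own (not DefCore's actual process, tooling, or capability names) of the kind of check a vendor might run: compare the capabilities whose tests a product passes against a required baseline list.

```python
# Illustrative only: capability names and results are invented, not real
# DefCore identifiers.
REQUIRED_CAPABILITIES = {
    "compute-servers-create",
    "compute-servers-list",
    "identity-tokens-issue",
    "object-storage-basic",
}

def check_product(passed_tests: set[str]) -> bool:
    """Return True if every required capability has a passing test."""
    missing = REQUIRED_CAPABILITIES - passed_tests
    for capability in sorted(missing):
        print(f"FAIL: required capability not covered: {capability}")
    return not missing

if __name__ == "__main__":
    vendor_results = {
        "compute-servers-create",
        "compute-servers-list",
        "identity-tokens-issue",
    }
    ok = check_product(vendor_results)
    print("meets the baseline" if ok else "does not meet the baseline")
```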

Are there other open source projects out there using "designated

sections" of code to define their product, or is this concept unique to

OpenStack? What lessons do you think can be learned from other projects'

control (or lack thereof) of what must be included to retain the use of

the project's name?

I'm not aware of other projects using those exact words. We picked up

'designated sections' because the community felt that 'plug-ins' and

'modules' were too limited and generic. I think the term can be

confusing, but it was the best we found.

If you consider designated sections to be plug-ins or modules, then

there are other projects with similar concepts. Many successful open

source projects (Eclipse, Linux, Samba) are functionally frameworks that

have very robust extensibility. These projects encourage people to use

their code base creatively and then give back some (not all) of their

lessons learned in the form of code contributes. If the scope returning

value to upstream is too broad then sharing back can become onerous and

forking ensues.

All projects must work to find the right balance between collaborative

areas (which have community overhead to join) and independent modules

(which allow small teams to move quickly). From that perspective, I

think the concept is very aligned with good engineering design

principles.

The key goal is to help the technical and vendor communities know where

it's safe to offer alternatives and where they are expected to work in

the upstream. In my opinion, designated sections foster innovation

because they allow people to try new ideas and to target specialized use

cases without having to fight about which parts get upstreamed.
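
As a loose illustration of that framework-plus-extensions pattern (my own sketch, not how OpenStack implements designated sections), a core can own a small registration point while independent modules plug alternative implementations in behind it:

```python
# Minimal plugin-registry sketch: the "core" owns the interface and the
# registry; independent modules register alternative implementations.
from typing import Callable, Dict

DRIVERS: Dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator used by plug-in modules to add themselves to the core."""
    def wrapper(func: Callable[[str], str]):
        DRIVERS[name] = func
        return func
    return wrapper

# Two hypothetical alternative implementations of the same capability.
@register("local")
def store_local(blob: str) -> str:
    return f"stored '{blob}' on local disk"

@register("object-store")
def store_object(blob: str) -> str:
    return f"stored '{blob}' in an object store"

def store(blob: str, driver: str = "local") -> str:
    """Core entry point: behavior comes from whichever driver is chosen."""
    return DRIVERS[driver](blob)

if __name__ == "__main__":
    print(store("snapshot-01"))
    print(store("snapshot-01", driver="object-store"))
```

The collaborative core is the interface and registry; the independent modules are where small teams can move quickly, which mirrors the balance described above.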

What is it like to serve as a community elected OpenStack board member?

Are there interests you hope to serve that are different from the

corporate board spots, or is that distinction even noticeable in

practice?

It's been like trying to row a dragon boat down class III rapids. There

are a lot of people with oars in the water but we're neither all rowing

together nor able to fight the current. I do think the community members

represent different interests than the sponsored seats but I also think

the TC/board seats are different too. Each board member brings a

distinct perspective based on their experience and interests. While

those perspectives are shaped by their employment, I'm very happy to say

that I do not see their corporate affiliation as a factor in their

actions or decisions. I can think of specific cases where I've seen the

opposite: board members have acted outside of their affiliation.

When you look back at how OpenStack has grown and developed over the

past four years, what has been your biggest surprise?

Honestly, I'm surprised about how many wheels we've had to re-invent. I

don't know if it's cultural or truly a need created by the size and

scope of the project, but it seems like we've had to (re)create things

that we could have leveraged.

What are you most excited about for the "K" release of OpenStack?

The addition of platform services like Database as a Service, DNS as a Service, and Firewall as a Service. I think these IaaS "adjacent" services

are essential to completing the cloud infrastructure story.

Any final thoughts?

In DefCore, we've moved slowly and deliberately to ensure people have a

chance to participate. We've also pushed some problems into the future

so that we could resolve the central issues first. We need the community

to speak up (either for or against) in order for us to accelerate:

silence means we must pause for more input.

JĂ©rĂ´me Petazzoni on the breathtaking growth of Docker (interview with JĂ©rĂ´me Petazzoni, Docker)

Richard Morrell (originally published July 2014)

For those of us veterans in the open source software (OSS) community,

certain technologies come along in our lifetime that revolutionise how

we consume and manage our technology utilisation. During the early 2000s

the concept of high availability (HA) and clustering allowed Linux to

really stack up in the datacentre.

In the same way that power management and virtualisation have allowed us to get maximum engineering benefit from our server utilisation, the problem of how to really solve first-world problems in virtualisation has remained prevalent. Docker's open sourcing in 2013 can really align

itself with these pivotal moments in the evolution of open

source—providing the extensible building blocks allowing us as

engineers and architects to extend distributed platforms like never

before. At the same time, we have to manage and secure the underlying technology to provide strength in depth, while keeping in mind Dan Walsh's mantra: *Containers do not contain*.

JĂ©rĂ´me Petazzoni is [talking at

OSCON 2014](http://www.oscon.com/oscon2014/public/schedule/speaker/151611),

and I had the opportunity to pose him some questions that I thought

would make interesting reading to the Opensource.com audience. Thanks to

JĂ©rĂ´me for taking the time to answer them, and I urge as many of you as

possible to [attend both

his](http://www.oscon.com/oscon2014/public/schedule/speaker/151611) and

[all the other keynotes and breakout

sessions](http://www.oscon.com/oscon2014/public/schedule/grid/public)

throughout OSCON.

The transition from DotCloud to Docker, and the breathtaking growth

curve to the release of 1.0, has seen Docker really demonstrate how you

can take good engineering, great code and deliver it. Openly. You've

worked hard with the release of 1.0 to get best practices in place to

ensure the code you put out was stable. Many projects go through a curve

where they move from a release tree to all of a sudden having to start

thinking about development methodologies and testing environments. How

have you found this in the short evolution that Docker has gone through

to get to 1.0? What have you learnt from this that will help you in

future releases?

You said it: "testing environments." We didn’t wait for Docker 1.0 to

have testing at the core of the development process\! The Docker Engine

had unit tests very early (as early as the 0.1.0 version in March

2013\!), and integration tests followed shortly. Those tests were

essential to the overall quality of the project. One important milestone

was the ability to run Docker within Docker, which simplified QA by

leveling the environment in which we test Docker. That was in Docker

0.6, in August 2013—almost one year ago. That shows how much we care

about testing and repeatability.

It doesn’t mean that the testing environment is perfect and won’t be

changed, ever. We want to expand the test matrix, so that every single

code change gets to run on multiple physical and virtual machines, on a

variety of host distributions, using all the combinations of runtime

options of the Docker Engine. That’s a significant endeavor, but a

necessary one. It fits the evolutionary curve of Docker, going from being “mostly Ubuntu only” to the ubiquitous platform that it has become today.
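
A test matrix like the one described here is essentially a Cartesian product. This little sketch (with placeholder dimension values of my own choosing, not Docker's actual CI configuration) just enumerates the combinations a CI system would have to cover:

```python
from itertools import product

# Hypothetical dimensions; the real matrix lives in the project's CI setup.
machines = ["physical", "kvm-vm"]
distributions = ["ubuntu-14.04", "debian-7", "fedora-20", "centos-6"]
runtime_options = ["default", "--storage-driver=devicemapper", "--icc=false"]

matrix = list(product(machines, distributions, runtime_options))
print(f"{len(matrix)} combinations to test")
for machine, distro, opts in matrix:
    print(f"run engine tests on {machine} / {distro} with {opts}")
```

Even this toy version makes the scale visible: every new dimension multiplies the number of runs, which is why expanding the matrix is "a significant endeavor."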

The announcement of the Repository Program [note: it's "Official Repository"] really added massive value and a vote of confidence in how partners can work with you. How did you ensure, when building the program, that contributing partners could engage and maintain quality control over their contributions openly?

In order to participate in the Official Repository program, partners had

to host their source code in a public GitHub repository with the

associated Dockerfile. This allowed the Docker team to review ahead of

declaring it “Official.” In addition, participating partners must

provide instructions on how users can submit pull requests to

continuously improve the quality of the Official Repositories in the

program.

Over the last decade-plus, the community and the supporting companies involved in open source have shown enterprise the value of adopting Linux as a mainstream workload for applications and engineering solutions. Docker is now a staple part of a sea change in how we take next steps in service and application provision, probably more so than at any time in recent history. Do you think that enough enterprises understand the difference between the more lightweight and agile container paradigm shift Docker brings compared to, say, heavyweight (and expensive) technologies such as VMware?

We definitely see an increasing number of enterprises understanding this

shift. Don’t get me wrong: they are not abandoning virtual machines to

replace them with containers. Both approaches (heavyweight and

lightweight virtualizations) are complementary. Embracing containers can

be a huge paradigm shift, but it doesn’t have to be. When discussing

Docker best practices with developers and operation teams, we often

present two approaches side by side: evolutionary and revolutionary. The

former requires minor changes to existing processes, yet provides

significant improvements; the latter, as the name implies, is more

drastic—both in implementation cost (for existing platforms) and

realized savings. Our users and customers appreciate both possibilities and pick the one that best suits their requirements.

Docker has been very open and responsible with making sure that you

provide your users with complete transparency around security concepts

in the Docker architecture. Do you see an opportunity, in future releases, for a "Super Docker" container that would allow you to enforce mandatory controls, reducing your attack surface, making the most of the advantages of namespaces, and also limiting (and auditing) access to the control socket?

It’s a possibility. I personally believe that security is not a binary

process. There is no single feature that will grant you absolute protection. You have to deploy multiple additional layers: security modules (like SELinux or AppArmor), kernel isolation features (like the user namespace), common UNIX best practices (like “don’t run stuff as root when it doesn’t need it”), and of course stay on top of the

security advisories for the libraries and frameworks that you are using

in your application components.

A new “Super Docker” container would probably not add a lot of security;

however, it could be used to group multiple related containers in the

same security context, if they need advanced or efficient ways to share

files or data.
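To make the layered approach above concrete, here is a minimal sketch of launching a service container with several of those protections stacked. It is not an official Docker recommendation; the image, user, and command names are hypothetical, and some of the flags shown (such as --cap-drop, --read-only, and --tmpfs) arrived in Docker releases after the 1.0 era discussed here. SELinux or AppArmor policy on the host would add yet another layer on top.

```python
"""Sketch: run an application container with several defensive layers.

Image, user, and command are hypothetical placeholders. Host-level security
modules (SELinux/AppArmor) and timely security updates complete the picture.
"""
import subprocess

def run_hardened(image="example/webapp", command="python app.py"):
    cmd = [
        "docker", "run", "--rm",
        "-u", "nobody",         # don't run as root inside the container
        "--cap-drop", "ALL",    # drop Linux capabilities the app doesn't need
        "--read-only",          # mount the container's root filesystem read-only
        "--tmpfs", "/tmp",      # but provide scratch space for temporary files
        image, "sh", "-c", command,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_hardened()
```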

One of the reasons I personally am excited about Docker is not just the opportunities it affords us in our ability to offer stable services to customers and to build the platforms we want, but also its potential for taking Linux securely into new areas we already "tickle" but where Solaris and AIX still remain entrenched. The use of Linux in the classified and government spaces is tied down by Common Criteria evaluations that determine Evaluation Assurance Levels. Docker is a game changer: it is one of the most prominent and important coming-of-age moments in the Linux story, and it has an opportunity to tear up the rule book. Are governments aware of the opportunity Docker gives them, and if not, is this something that you're going to engage with in the next steps of Docker as an organisation?

The government market is very aware of Docker and has already reached

out to us. We have been in touch with government organizations,

including those within the intelligence community.

You personally come from a service provider background, having helped organisations with hosting and private cloud needs at your own company Enix. You therefore know that many managed hosting companies now looking to cloud have already built clouds that people don't want to consume services on, simply because they felt they had to have a cloud. Those companies in particular have already spent a lot of their budgets on proprietary technologies to help them get to cloud. Do you see many of them now knocking on your door, realising that customers have a need to look to Docker?

Many channel partners recognize the portability benefits of Docker and

are actively developing practices based on Docker to help their

customers abstract away any technological dependencies on any specific

service provider.

Their past investments in private clouds are still relevant. If they

have deployed (for instance) OpenStack, they can already leverage the multiple integrations available between OpenStack and Docker, through Nova, Heat, and soon Solum. Even if they built their own in-house IaaS

solution, they can still deploy Docker on top of that to use it for

themselves or offer it to their customers. Of course, native approaches

(i.e. selling “Docker-as-a-Service”) will require additional integration

work, but Docker doesn’t reduce the value of their platform: it

complements it and unlocks new revenue channels.
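Part of that integration work is simply driving the Docker Engine's remote API from an existing provisioning platform. Here is a minimal sketch using the Docker SDK for Python (docker-py); note that the SDK calls shown reflect versions released after this interview, and the image, port mapping, and function name are placeholders.

```python
"""Sketch: launch a container from a provider's provisioning code.

Uses the Docker SDK for Python (docker-py). Image name, port mapping, and
function name are illustrative; error handling and resource limits omitted.
"""
import docker

def launch_customer_container(image="nginx:latest", name=None):
    client = docker.from_env()          # connect via the local Docker socket
    client.images.pull(image)           # ensure the image is available locally
    container = client.containers.run(  # roughly equivalent to `docker run -d`
        image,
        name=name,
        detach=True,
        ports={"80/tcp": None},         # publish port 80 on a random host port
    )
    return container.id

if __name__ == "__main__":
    print(launch_customer_container(name="customer-demo"))
```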

Moving forward, what are your hopes and ambitions for Docker, not taking into account Solomon or Ben knocking on your door with new features that always tend to shake up the whole development environment?

The Docker platform offers a new approach for IT teams to build, ship,

and run distributed apps. Our ambition is to grow a great, sustainable

business that nurtures an active community ecosystem and provides great

solutions to customers who are moving to this new world of

micro-services.

On a mission to digitize and share the world’s visual history (interview with Thomas Smith, Project Gado)

Robin Muilwijk (originally published July 2014)

Thomas Smith will be [speaking at

OSCON 2014](http://www.oscon.com/oscon2014/public/schedule/detail/34595)

about [Project Gado](http://projectgado.org/).

Project Gado has created an open source robotic scanner that small

archives can use to digitize their photographic collections. Gado means

inheritance in the West African language Hausa, reflecting the project’s

historical preservation goal.

He has always wanted to be an inventor, and I spoke with him about what

it's like to work as a technology consultant in the San Francisco Bay

Area. In this interview, Thomas tells me more about how Project Gado

came to life, how the Gado community evolved, and how open source thinking is applied to everything the project does.

Tell us about yourself and your background. Did you always want to

become an inventor?

Yes! When I was little and people asked what I wanted to be when I grew

up, I always said "inventor." I got my first power drill at three and my

first soldering iron at ten, and I was always excited to take something

apart and see how it worked.

Today, I work as an entrepreneur and technology consultant in the San

Francisco Bay Area. I have degrees from Johns Hopkins University in two

different fields: Cognitive Science and Cultural Anthropology. I think

this background helps me a lot in my work. It allows me to think in a

formal way about technical challenges, but also to ground my solutions

in a practical understanding of communities and their needs.

Can you share the moment, and especially the motivation at the time, when the idea behind Project Gado came about? What sparked the idea, and what vision did you have?

I was working on an oral history project in East Baltimore. One day, I

tagged along with a colleague to visit the Afro American Newspapers in

Baltimore, looking for photos of the community I was studying. What I

found was an archive of 1.5 million historical photos dating back to

1892. It included never-published photos of everyone from Martin Luther

King Jr. to Eleanor Roosevelt.

The photos were largely hidden, because of the high cost of doing

hand-scanning, editing the scans, and adding metadata. In the whole

history of the paper, they had only digitized about 5,000 photos. I

realized that if I could use technology to radically automate

digitization and photo processing, millions of photos from organizations

like the Afro could be available to the historical record.

Today, we’ve scanned 125,000 images at the Afro. Using technologies

including our open source Gado 2 robot, we can digitize archives at

little to no upfront cost, and we can even help archives monetize their

photos through licensing.

It is inspiring to see how collaboration between Project Gado and other

organisations picked up. Can you tell us more about this process and the

evolution of the ecosystem around Project Gado?

Like many open source projects, we started out slowly. We released our

Gado 2 robot’s design files and code in 2011, and for about two years,

nothing happened. In 2013, we got more active and released a Gado 2 kit

to ten early adopters. That really set things in motion. We ended up

with user groups in California, Boston, and Baltimore, as well as our

flagship group in Finland.

One of the coolest things about open source is that once you seed the

community with an initial burst of activity, it reaches a point where

organic growth takes over. At this point, we don’t have to do much at all to maintain the community.

Your talk at OSCON 2014 will show how Project Gado is an excellent

example of how an open source community can grow, collaborate, and

innovate together. Can you share with us, as a sneak preview to your

talk, what other open source projects can learn from Project Gado?

One of the major lessons is that you have to actively work to promote

your project and get the community started. You can’t assume that if you

have a great idea, people will show up to collaborate on it. We released

kits, but we also spoke at conferences, built a presence on the web, and

went through traditional media. Our Finland group originally read about

Project Gado in a newspaper article.

If you were to look three to five years ahead, what future do you see

for Project Gado? Any other final thoughts you would like to share?

Our mission is to digitize and share the world’s visual history. We’ve developed the technologies and relationships to make that possible. In

five years, I’d like to see us doing it at a much grander scale. There

are hundreds of millions of analog photographs out there, and I’d like

to make all of them available to the world.

The challenges of Open edX's large and complex codebase (interview with David Baumgold, edX)

Travis Kepley (originally published July 2014)

One of the fundamental tenets of the open source movement is free access to knowledge. There is a growing community of educators, institutions, and organizations that see open access to knowledge as extending well beyond source code. For several groups and universities this has become a focal point for the future of worldwide education.

edX has been making a name for itself by not only creating a fantastic partnership of like-minded educators and institutions, but also by open sourcing the platform on which edX is built.

As part of Opensource.com's [interview

series](http://opensource.com/business/14/7/speaker-interview-series-oscon-2014)

for OSCON 2014, I had a chance to speak with David Baumgold—a software

engineer at edX—about education, open source, and what lies in store for

OSCON 2014 attendees who are interested in open access to education. Be

sure to catch David and James' [talk on Open

edX](http://www.oscon.com/oscon2014/public/schedule/detail/34695) at

OSCON this year!

I have been a big fan of the open course initiative since the beginning.

I notice that there are several foundations working with you including

one near to our hearts, The Linux Foundation. Can you give me a little

detail on how The Linux Foundation became involved with the edX project?

I don’t know the details of that partnership, but it seems to be

exclusively focused around providing the Linux course on

[edx.org](https://www.edx.org/), as opposed to the Linux Foundation

actually contributing any code to the [Open edX](http://code.edx.org/) codebase. Since I’m focused on integrating code

contributions to our codebase, I don’t hear very much about the other

initiatives going on with the destination website. I *do* know that the

Linux course has been extremely popular, and lots of people seem excited to learn more about software and open source!

One of the big tenets of the open source philosophy is that information

should be available and not a guarded secret. This is clearly in line

with edX's goals. Can you give us some examples of where you are able to

pull from the open source community to expose knowledge to the masses?

The biggest success I can point to is our XBlock architecture.

[XBlock](https://xblock.readthedocs.org/en/latest/) is an extensible

system that allows a developer to define a component of a webpage—such

as a video player, a Javascript-powered interactive, a view of the

discussion forum, and so on—and reuse that component within a course and

across courses. We can’t possibly build all of the amazing learning

tools that every possible course will want to use, but XBlock allows

course authors to build their own tools and share them with others in

the learning community. Recently, a team at MIT that had no prior

contact with the edX development teams built their own XBlock, and

announced it on the community mailing list. The response from other

institutions and organizations in the community has been overwhelmingly

positive: there’s clearly a big demand for this sort of component that

edX didn’t even know about. But by providing the tools for the

community, open source allows people to scratch their own itch, and help

everyone learn more effectively.
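For readers curious what authoring such a component involves, here is a hedged, minimal sketch in the spirit of the XBlock SDK's tutorial examples; the class, field, and handler names are illustrative and not any of the components mentioned above. An XBlock is a Python class with typed, scoped fields and one or more views.

```python
"""Minimal XBlock sketch, loosely modeled on the XBlock SDK's examples.

A "thumbs up" style component: it stores a vote count shared across learners
and renders it in the student view. Names here are illustrative only.
"""
from xblock.core import XBlock
from xblock.fields import Integer, Scope
from xblock.fragment import Fragment  # newer releases use web_fragments.fragment


class UpvoteXBlock(XBlock):
    """A tiny component that counts upvotes from learners."""

    upvotes = Integer(
        help="Number of upvotes across all learners",
        default=0,
        scope=Scope.user_state_summary,  # aggregated, not per-learner, state
    )

    def student_view(self, context=None):
        """The HTML shown to a learner inside the courseware."""
        return Fragment(u"<p>Upvotes so far: {}</p>".format(self.upvotes))

    @XBlock.json_handler
    def vote(self, data, suffix=""):
        """AJAX handler the front-end JavaScript would call to register a vote."""
        self.upvotes += 1
        return {"upvotes": self.upvotes}
```

Because the fields declare their own storage scope, the runtime handles persistence, so a block like this could be reused within a course and across courses, which is the point of the architecture.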

What would you say is the biggest challenge for open access to

education? What are some small changes that could make a big difference

for edX to achieve its goals?

I think the biggest challenge is the matter of scale. We know how to

provide quality education in small groups of people, but the fundamental

challenge that edX is facing is how to provide quality education to

massive groups of people. The Internet provides tremendous opportunity

to connect people in unprecedented ways, and we’re finding that we can

all learn a lot from each other—the traditional lecture-style classroom

may end up taking a back seat to a more collaborative, networked

environment, where students learn more from their peers than from a

professor. We are just beginning to learn more about how education

happens on a massive scale, and the challenge—and the opportunity—is in

figuring out what works in this uncharted world.

Another core tenet of successful open projects is a strong and vocal

community. Can you tell us a bit about the edX community of users and

contributors?

We have a large community already, and they’re certainly vocal about

what they want and need! I would say that the [Open

edX](https://github.com/edx/) community can be divided up into three

categories.

We have the community members that represent universities with courses

running on edx.org, or other universities that have strong ties to the

company. These community members often share information with each other

about how to use edX effectively, whether it's answering questions,

sharing code contributions, or discussing theories of pedagogy.

Next, we have community members that have their own Open edX

installations, and want to make it easier to keep their code up to date,

learn more about how to run their servers effectively, and hear what edX

is planning on building next. These people also report bugs in our

codebase, and sometimes help get them fixed as well.

Finally, there are the people who are just dipping their toes in with

the project: people who have trouble getting it installed, people who

want to know if it supports a certain feature, people who don’t know

where to begin. The Open edX codebase is large and complex, and there’s

only so much we can do to make it simpler! For these people, we’re

trying to learn how to give them the best help we can, and how to help

them help each other. The more people there are using Open edX, the more

potential contributors there are making things better for everyone!

In a [recent article in the Boston

Globe](http://www.bostonglobe.com/metro/2014/06/06/nearly-three-quarters-online-students-harvard-mit-are-from-outside-report-finds/BuZiQngGDHkpwEWwLEhMrM/story.html),

it was stated that the vast majority of students were from outside of the

US where edX is based. I would imagine this is pretty exciting given the

global participation you have from universities overseas. What are your

thoughts on the numbers from that article? Some numbers are really

surprising!

Education is important around the world, so it makes sense that we would

have a large global community—after all, there are more people in the

world living outside of the US than there are living in the US. However,

it’s still amazing that we have this kind of reach! Apparently, the US

education system is well-known around the world.

We’ve also seen impacts of this phenomenon in the Open edX open source

project. It turns out that our very first contributions from our open

source community were focused around internationalization, so that users

could use the website in their native language. Our community really

pushed us forward on this front—and as I said, they can be quite vocal

about what they need! Our volunteer translators are translating more

and more of our platform into other languages, and there’s always lots

of celebrating at the edX offices when we release another language on

the edx.org destination website.

Is there anything in particular you'd like to share with us about your

talk for OSCON 2014 this year?

This is going to be my very first OSCON, and I’m very excited. I have no

idea what’s in store, but looking at the schedule of talks ([and the

sheer number of

them!](http://www.oscon.com/oscon2014/public/schedule/grid/public)) I’m

sure it’s going to be an amazing conference.

Getting kids interested in programming, robotics, and engineering (interview with Arun Gupta, Red Hat)

Jason Baker (originally published July 2014)

It's easy to get kids interested in technology when the technology is fun! And the options out there for fun ways for kids to learn are growing every day. From building robots to programming games to building

your own electronics, the line between play and learning is steadily

blurring, and what's more, many of these platforms are built on open

source.

Meet [Arun

Gupta](http://www.oscon.com/oscon2014/public/schedule/speaker/143377).

He is Director of Developer Advocacy at Red Hat, where he focuses on

JBoss Middleware. But when Gupta's not at work, one of his passions is

the Devoxx4Kids program (he founded the United States chapter). Gupta

will be [speaking next

week](http://www.oscon.com/oscon2014/public/schedule/detail/33648) at

OSCON, where he'll share his experience with Devoxx4Kids and provide

some pointers for other parents wishing to get their kids involved.

In this interview, we learned what Devoxx4Kids is all about, the

children it reaches, and how you can get started with a chapter in your

area.

Tell us about Devoxx4Kids! What is it, and how did you first get

involved?

Devoxx4Kids is a global organization with a goal to introduce school

kids to programming, robotics and engineering in a fun way. This is

achieved by organizing sessions where children can develop computer

games, program robots and also have an introduction to electronics.

Different chapters around the world have conducted workshops on Scratch,

Greenfoot, Alice, Minecraft modding, Raspberry Pi, Arduino, Python, NAO

robots, and a variety of other topics.

Devoxx is a professional developer conference based out of Europe. The founders of the conference tried to teach their kids about what they do for a living and realized most of the material is in English, and not in their local language. That led to the birth of Devoxx4Kids. I've been speaking at the conference for a few years and have delivered kids' workshops for over a year now. I've known the organizers for a few years, so it was very logical for me to build upon the effort and bring it to the USA.

I founded the US chapter, which is a non-profit 501(c)(3) organization. A team helps me drive the Bay Area chapter and we've

conducted several workshops here. You can take a look at them at [our

website](http://www.meetup.com/Devoxx4Kids-BayArea/). You can find more

about the [organization](http://www.devoxx4kids.org/) there.

It sounds like there are projects for a wide range of ages. How young

does Devoxx4Kids reach?

Our workshops are targeted at kids from elementary to high school. Each

workshop provides a recommended age range, and we've seen these recommendations are generally honored by the attendees. Each workshop generally has a few

volunteers to help the attendees move along.

Each country has a different way to reach out to local attendees. The Bay Area chapter reaches out using [Meetup](http://www.meetup.com/Devoxx4Kids-BayArea/). Belgium, Holland, the UK, and other countries use Eventbrite. These workshops are then promoted using the usual social media channels.

Devoxx4Kids sessions have been delivered at different technology conferences around the world as well. We also have a [free video channel](http://www.parleys.com/channel/51b6ea81e4b0065193d63047) that allows us to expand our reach beyond the physical workshops.

So far, we've conducted about 120 workshops around the world with over

2,000 kids, and 30% of them are girls. We are very proud of that, and we are certainly seeing interest in opening chapters in different parts of the world. Take a look at [our website](http://www.devoxx4kids.org/join-us/)

if you are interested in opening a chapter.

We encourage parents to let the kids explore on their own: kids become a lot more independent, apply their own wonderful minds to solving problems, and get a big morale boost from it. Parents do stay with the kids in some workshops, though, and help them catch up and move along with the rest of the attendees. This is especially true when younger kids are participating in a workshop with a lot of older kids.

Most of the time it's been an enriching experience, both for the attendees and for us as instructors. Every workshop is a new learning experience.

What is the most exciting kid-created project you've encountered so far?

Each workshop has a different level of excitement and the sparkle in

attendees' eyes is priceless. However, there are certain workshops where the excitement level is particularly high. Minecraft modding definitely stands out. We've seen some really creative mods from first-time Minecrafters in the modding class. That is very encouraging!

The Scratch workshop is a big hit with kids in elementary and middle school. Their interaction with Leap Motion and the ability to control sprites in Scratch using their hands is very exciting for them.

You're giving another talk on Java EE 7. How can we bridge the gap so

that skills learned in Devoxx4Kids continue into future careers as

programmers or other related fields?

Our goal is to expose technology to kids at an early age and more

importantly in a fun way. Hopefully this will motivate them to stay

engaged as they grow. In some of our workshops, we also invite parents from the high-tech industry to talk about their successful careers. This allows the kids to connect the dots from what they are doing now to where they could be someday. We only try to motivate kids and show them options while they are still young. What they ultimately choose as a career could be completely different. But I'd like to say, "If not us, then who? If not now, then when?"

Without giving too much away, what are some of the tips and best

practices you plan to discuss for workshop organizing?

If you are a newbie runner, then there are tons of questions. How much distance should I run? What should I eat? How many days a week should I train? What kind of cross-training should I do? Shoes, GPS, protein/fat/carb ratios: the list is endless.

Similarly, organizing a workshop for the first time can be overwhelming, but we've delivered several of them all around the world. This particular session will answer questions like: what does it take to run a workshop, what topics are relevant, how many attendees should be invited, how do you get volunteers, where is the training material, and what about t-shirts, swag, and sponsors?

Is there anything else you'd like to add?

I'm personally thankful to O'Reilly for giving us an opportunity to talk

about Devoxx4Kids at OSCON. We are also organizing [OSCON Kids

Day](http://www.oscon.com/oscon2014/public/schedule/detail/35847) on the Sunday before OSCON, which will help motivate even more kids.

Do check out our [website](http://www.devoxx4kids.org/) and let us know

if you are interested in [opening a

chapter](http://www.devoxx4kids.org/join-us/).

About This Series

The Open Voices eBook series highlights ways open source tools and open

source values can change the world. Read more at

<http://opensource.com/resources/ebooks>.