I participated in the PGP Keysigning Party at ApacheCon EU 2016 recently. Being a newbie at PGP keysigning, I made the mistake of not sending the keys back to the key server after signing, as I was using a GUI tool called Seahorse. So today, after realizing this by searching for my key on MIT’s PGP keyserver, I decided to do it again and send the keys, this time using GnuPG. GnuPG, the GNU implementation of PGP, is easy to use and well documented. I followed this article and am providing a list of simple commands below for those who already have GnuPG set up. Also, please note that you should not sign someone’s key until you have verified their identity.
Get the key:
$ gpg --keyserver pgp.mit.edu --recv-keys <key-id>

Check the fingerprint:
$ gpg --fingerprint <key-id>

Sign the key:
$ gpg --sign-key <key-id>

Upload the key:
$ gpg --keyserver pgp.mit.edu --send-key <key-id>

Here <key-id> is the ID or fingerprint of the key you are signing.
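Put together, the whole round trip can be sketched as a small script. This is just an illustration: the key ID below is a placeholder, and you should compare the printed fingerprint against the one the key owner gave you in person before signing.

```shell
#!/bin/sh
# Placeholder key ID -- replace with the real one from the keysigning party sheet.
KEYID="0xDEADBEEF"
KEYSERVER="pgp.mit.edu"

# 1. Fetch the key from the keyserver.
gpg --keyserver "$KEYSERVER" --recv-keys "$KEYID"

# 2. Print the fingerprint and compare it, character by character,
#    with the fingerprint the owner handed you in person.
gpg --fingerprint "$KEYID"

# 3. Sign the key (gpg prompts for confirmation and your passphrase).
gpg --sign-key "$KEYID"

# 4. Crucially, push the signed key back to the server --
#    the step I had missed when using Seahorse.
gpg --keyserver "$KEYSERVER" --send-key "$KEYID"
```

Without step 4, the signature only lives in your local keyring and never reaches the keyserver, which is exactly the mistake I made the first time.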
The morning began with keynotes from Sebastian Blanc and Bertrand Delacretaz. After that, I attended Jean-Frederic Clere’s talk on HTTP/2 and SSL/TLS, in which he also demonstrated the new protocol and its extensions.
After that was a talk titled “If you build it, They won’t come” by Ruth Suehle. She talked about how UI/UX and documentation can make or break open source projects, presenting examples from small and large projects alike. She also analyzed Apache Infra’s new website, which will be launching soon.
The next talk was from Rod Cope, who spoke about the need to build Offline First apps and presented some of the features of PouchDB/CouchDB. I got to learn about CouchDB and PouchDB and will try using them in my next projects.
And finally, after this we had the TACers meet hosted by Melissa and Christofer Dutz, where we talked about our experiences at the event.
In the end, I would like to thank the Apache Software Foundation, the Linux Foundation, and the Apache community for having me at the conference. I hope to stay connected with everyone I met there and to get more involved with the ASF.
On day 4, I attended the Apache Way track, as I felt I should get to know more about the ASF and how Apache works. I learned a lot and feel this has paved the way for me to get further involved with Apache projects. The first session was a panel discussion hosted by Nick Burch, with the panelists drawn from members of the ASF.
The “Apache Way” is the process by which Apache Software Foundation projects are managed. It has evolved over many years and has produced over 100 highly successful open source projects. It generally works well! But not always.
From the session abstract: “In this session, we’ll follow on from the theory and look more at the practice of how it works. We’ll look at cases when it has worked well, and when it has had problems. We’ll see more of the boundaries: the things that can be changed, and those that are firm, fixed rules. We’ll see how businesses can get involved, and where project independence means they need to step back. Licensing, trademarks, decisions, marketing, infrastructure and more.”
Then came the session by Wen Ming, who talked about how they built a tech community in China using the Apache Way and discussed the problems they faced while doing so. It was interesting to hear how different parts of the world struggle to accept that open source is as important as regular work. In most Asian countries, employers expect their employees to devote all their time to company work, and anything done beyond that is viewed as almost illegitimate.
The morning began with me waking up a little late; the previous night we had the Attendee Reception. The BarCamp began around 10 AM. A barcamp is an ‘unconference’ with no set schedule, facilitated by those involved in various Apache projects. It was organized by Jean-Frederic Clere and Sharon Foga.
Since most people don’t know what a barcamp is and might be confused, I’d suggest reading up about it at https://en.wikipedia.org/wiki/BarCamp
Here are his slides. I would recommend them to anyone who is trying to build Open Communities.
After a small break, it was time for the day’s keynotes and for ApacheCon to officially begin. Rich Bowen gave the opening remarks and welcomed all the attendees and speakers to the coming days of ApacheCon. Jim Jagielski gave the State of the Feather speech. I was amazed to learn so much more about the Apache Software Foundation and its resolve to put community before code.
Then we had the Lightning Talks, which were amazing. One of the most memorable was given by someone who had made a drinking game out of code reviews. Another memorable one came from Shane Curcuru, about how he got involved with Apache and how others can do the same.
The second day of ApacheCon Big Data was also successful and amazing. It was a long day that started with a keynote by Mayank Bansal from Uber, who explained Uber’s big data stack and how they scaled it up.
The next keynote was by Sean Owen from Cloudera, who explained how Apache is more than just another GitHub where people dump their code: it’s a place for building community. It was also nice to hear his shoutout to Apache Allura, which he used to illustrate the diversity and reach of the foundation’s projects. We usually think of the ASF as the home of HTTPd and the big data projects, but it is more than that, with lesser-known projects such as Apache Allura alongside them.
Then I attended the session on Distributed and Native Machine Learning using Apache Mahout by Suneel Marthi from Red Hat. The talk was math-intensive and showed how Apache Mahout lets data scientists forget about the implementation of the stack underneath and simply write the code for their data projects in their favorite language. He demonstrated how easy distributed linear algebra is with Mahout-Samsara, using the Eigenfaces classification problem as an example.
Another interesting talk was given by Clemens Valiente from the Trivago development team, who explained his company’s big data stack and how they moved from a simple Java platform to a big data stack that reduced their query time from 5 seconds to less than a second.
Julien Herzen presented Meerkat, a system built at Swisscom for real-time anomaly detection on time series. Meerkat uses a combination of machine learning and big data technologies to trigger alerts in case of problems in Swisscom’s network.
It was fun to volunteer for today’s sessions at Apache Big Data 2016. My responsibilities included helping the speakers set up their laptops, introducing them to the audience, keeping time and reminding them when it was running short, and finally helping out with the Q&A at the end. I also live-blogged the sessions via Twitter and interacted with the speakers. Most of the sessions were of my choice, since the volunteer team had a shared spreadsheet on which we could pick our sessions on a first-come, first-served basis.
The day began with the keynote sessions at 9:30. Rich Bowen started the conference with his opening remarks, followed by keynotes from Stephan Ewen and Alan Gates.
A small coffee break followed the keynotes, during which the TAC team met and all of us started preparing for the breakout sessions that ran in parallel in different conference rooms.
The first session I attended was about Apache Gearpump, an interesting project: a real-time big data streaming engine.
The second session was interesting, and I got to learn a lot more about Apache Solr. I learned about faceting, which was new to me and which I feel would be very useful for projects that use Solr. Even Apache Allura might be able to use it somewhere, though I will have to think about where.
The next session was one I was very interested in, as I had worked on a similar project called Blip; I talked to the presenter, Thomas Burgess, and told him about it as well. His company, indoo.rs, works on the same problem of providing indoor positioning services, and they have even deployed it at San Francisco Airport. Right now they are researching new ways of using big data analytics to reduce the time it takes to deploy these solutions, and they want to make the process automated. They are also looking into using some seed data points and then extrapolating them via crowdsourcing. Hopefully I can get in touch with them and discuss the related research going on at our university.
Next was a talk by Tim Park from Microsoft. I did not anticipate that this would turn out to be one of my favorite talks of the day, since I had not read much about the topic beforehand. It turned out to be great for me, as I learned a lot and was able to connect the dots.
Since the previous session ended a bit early, I was able to go to the central banquet area, where I socialized with a few Apache people like Jean-Frederic Clere and talked about the BarCamp. I also talked to some of the speakers and attendees I had interacted with during the events.
Afterwards, the last session I attended was about Druid and Apache Hive. It was also pretty good, though I did not know much about what was being covered. Still, watching the demonstration, I was able to figure out what had been discussed earlier and what the benefits were.
Toured Seville today thanks to https://www.feelthecitytours.com/en/tours/sevilla/ . They are an amazing company with a great tour of this beautiful city. I got to learn a lot about it, along with a bunch of great stories, one of them being the story of Carmen (http://www.geocities.jp/wakaru_opera/englishcarmen.html), which is a pretty nice tale.
The TAC meeting was pretty nice and was followed by the TAC team dinner at a Tapas restaurant.
Reached the Melia Sevilla in Spain tonight. Pretty excited about ApacheCon. I am planning to finalize the presentations about Allura by tomorrow night and discuss them with @brondsem on Monday.
It was a long trip to Sevilla via Madrid and London with @gauravsaini03, who will be speaking about Apache OFBiz, a complete enterprise solution.
Met Dr Paul King (@paulk_asert), who works on Apache Groovy and has worked on the “Groovy in Action” book from Manning Publications. He has three back-to-back talks about Groovy lined up at ApacheCon on Thursday.
I’m excited about attending ApacheCon Big Data and ApacheCon 16 in Seville, Spain. Only a few days are left. I talked with my mentor at Apache Allura, @brondsem, today about his experiences at ApacheCon NA Denver ’14 and what I could do at ACEU ’16. We’re hoping to improve the exposure of Apache Allura within the Apache community and to get some projects in the Incubator to try Allura. I’ll try to present a lightning talk and host a demo session for Allura at the conference. Also, my work on the importer begins again tomorrow, and hopefully it will be done before I reach Spain. I will share updates on this blog.