As anyone who has ever tried to recruit an Automation Engineer knows, good Automation Engineers are very hard to find. Often, people who were poor at programming simply fall into the role, but weak engineers don’t necessarily make great Automation Engineers. This may sound simplistic, but the core responsibility of an Automation Engineer is to test your product in an automated fashion. The best manual testers have a way of looking at a product that engineers don’t always possess. Great manual testers test predominantly in two ways:

  • How the user is most likely to use the product in an end to end fashion
  • How they can break the product in ways engineers never thought the user would

The second point above is usually the fun part for a tester, but it can uncover really important bugs, such as unauthorized access to secure locations. The key thing when hiring an Automation Engineer is to make sure they think like a tester and have sufficient programming knowledge to automate the tests.

Here at SecondMarket we happened to have two amazing testers. They possessed that unique skill of being able to discover almost all critical defects before the product went to production. This is no mean feat considering that we go to production every day. On our team, engineers write unit tests, integration tests, and some UI tests, but a significant amount of our front-end testing remained manual. This obviously needed to change, but after six months of interviewing candidates we were unable to find an Automation Engineer to lead the transition to full automation.

So we had only one option: our manual testers had to automate. But they weren’t able to program, or had only limited exposure to programming. This is where hiring great people always results in the best outcome. Our two awesome manual testers took it upon themselves to learn how to program. They started a year ago by taking the Stanford CS 101 course on Coursera. Then they purchased the book ‘Learn to Program’, which teaches programming using the Ruby language. After that they started looking at RSpec and Capybara. One day, they downloaded RubyMine and our codebase and wrote a test for our login page. They kept pushing to learn more. They started with simple tests and then discovered that tests can be structured better using patterns such as Page Objects. The engineers were eager to help their fellow teammates learn and spent numerous hours guiding them through the necessary fundamentals.
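
To give a flavor of the Page Object pattern they adopted: a page class wraps raw driver calls behind intention-revealing methods, so tests read like user actions. The sketch below is an illustration only; `LoginPage` and the in-memory `FakeDriver` are hypothetical stand-ins for a real Capybara session, not code from our suite.

```ruby
# A minimal, Capybara-free illustration of the Page Object pattern.
# FakeDriver is a hypothetical stand-in for a real browser session.
class FakeDriver
  attr_reader :fields, :submitted

  def initialize
    @fields = {}
    @submitted = false
  end

  def fill_in(field, value)
    @fields[field] = value
  end

  def click(_button)
    @submitted = true
  end
end

# The page object: tests talk to this, never to the driver directly,
# so a UI change touches one class instead of every test.
class LoginPage
  def initialize(driver)
    @driver = driver
  end

  def login_as(email, password)
    @driver.fill_in(:email, email)
    @driver.fill_in(:password, password)
    @driver.click(:sign_in)
  end
end

driver = FakeDriver.new
LoginPage.new(driver).login_as('qa@example.com', 'secret')
puts driver.submitted # => true
```

With Capybara the driver methods would be the real `fill_in`/`click_button` calls, but the shape of the page class stays the same.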

Today, they are writing fully functional automated tests that have become part of the automated build and release process for the new product we just released. The transition has been amazing to watch. It wasn’t something that happened overnight; it took a year, but the results are very satisfying. So, my advice: save yourself some pain and turn your best manual testers into Automation Engineers.

- Michael


I am a fan of Scrum. So much so that we use it across all our product development teams. Well, almost. We use a form of it. Actually, technically according to Scrum, we don’t use Scrum at all; we use “ScrumBut”. Therefore, according to the industry, we are scum, not Scrum :). I jest a little, but what does ‘ScrumBut’ mean? On the scrum.org website it is defined as follows:

ScrumButs are reasons why teams can’t take full advantage of Scrum to solve their problems and realize the full benefits of product development using Scrum. Every Scrum role, rule, and timebox is designed to provide the desired benefits and address predictable recurring problems. ScrumButs mean that Scrum has exposed a dysfunction that is contributing to the problem, but is too hard to fix. A ScrumBut retains the problem while modifying Scrum to make it invisible so that the dysfunction is no longer a thorn in the side of the team.

I have mostly heard this term replayed to me by Scrum coaches in response to a line like “We use Scrum for the most part, but…” Ah, ah, ah comes the interjection: “You are using ScrumBut; you are not following Scrum.” At this point I am supposed to be ashamed, I am supposed to feel stupid, but funnily enough, that’s not how I feel.

Some aspects of Scrum are core to the way we build software products. Planning meetings, daily scrums, and the definition of done are non-negotiable. These are critical and help us use the team’s resources as efficiently as possible. But other things, well, they don’t really matter to us.

So, what do we do wrong? Firstly, we don’t force the idea that everything in an iteration must be done at the end of the iteration. Planning, even if you are using story points (which I do believe make estimation better), is by its very nature an inexact science. We write as good a story as we possibly can, we use story points, and we estimate as accurately as we can, but there will be situations where there are unknown issues or an aspect of a story is incomplete. We strive for better stories and even try to have a good idea of the technical solution prior to planning, but it is still difficult to estimate accurately before starting development. With this in mind, we don’t call our iterations a failure if something is not done at the end of the iteration. More importantly, since we don’t consider it a failure, we don’t really try to fix it. Why not? Well, we go to production pretty much every day, so missing a date is not such a big deal if we can just push the feature live the next day. That’s not to say we sit around and don’t care about dates. If business-critical functionality needs to ship, we do everything necessary to ship it, but that has nothing to do with Scrum; it has to do with the culture of our organization, which is driven by delivering a high-quality product to our end users. What we do consider critical is that everything shipped to production is done, using the strong Scrum definition of the word “done”.

People might say that our teams are not reaching their full velocity. I firmly believe that great engineers build great software faster, not a framework such as Scrum. Scrum absolutely helps, at a certain level, to keep engineers focused and out of unnecessary meetings, but I have seen the “velocity” of teams that have worked together over a period of months increase dramatically over time, purely because they had great engineers who just wanted to build great product. When you first put a team together, it takes time for the team to gel; once it has gelled, productivity naturally increases whether you are using Scrum or not. In and of itself, having everything done at the end of an iteration does not increase velocity. Given this viewpoint, we don’t really value velocity; to us, it is as unscientific as counting lines of code.

What else don’t we do? We hold retrospectives every other iteration. Our iterations are two weeks long, and people found that holding retrospectives every other iteration was frequent enough. Sometimes managers attend scrum meetings; they can actually help, and enable the team to deliver product faster. But we certainly don’t allow the scrum to take longer than 15 minutes.

But my main objection today is the overuse of the phrase “ScrumBut”. It insinuates that the founding fathers of Scrum were able to anticipate every use case of Scrum, every environment in which Scrum is used, and every edge case. It insinuates that people can’t think for themselves and that other people know better. I give the benefit of the doubt that the term “ScrumBut” was coined with good intentions, primarily to keep new adopters on the straight and narrow. But I have a fundamental problem with the concept of infallibility: the idea that an organization knows everything and other people’s opinions don’t matter. It also contradicts one of the core objectives of Scrum, which is to have self-empowered teams that can make decisions for themselves.

- Michael


Our engineering and product teams are huge fans of Atlassian JIRA and its companion Wiki software, Confluence. Recently we upgraded from JIRA 4.3.4 to JIRA 5.2 and rebuilt the server from scratch, finally using Chef to provision it as much as possible.

Since we couldn’t find any good community cookbooks, we decided to write our own. (Opscode’s JIRA cookbook is woefully out of date and only runs on Ubuntu.) We’ve published them on our GitHub cookbooks organization under the names crowd, confluence and jira. For managing databases and users, we used the database cookbook from Opscode, and the excellent providers within that cookbook.

There are still a few improvements we would like to make to these cookbooks, chief among them being a way to manage/edit XML configuration files. For instance, since we deploy JIRA at the /jira context (with an Apache proxy in front of it for SSL termination), we’d like to be able to edit Tomcat’s server.xml file to make this so, without breaking future changes from upstream. We tried using xmlstarlet but ran into idempotency issues. Suggestions (and pull requests) are welcome.
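
One direction we may explore is driving the edit from Ruby with an explicit guard, so the change is only written when it is actually missing and repeated runs are no-ops. The sketch below is a minimal illustration of that idea, not what the cookbook does today; the simplified server.xml, the method name, and the element layout are assumptions.

```ruby
require 'rexml/document'

# Idempotently ensure Tomcat's server.xml declares a /jira Context.
# Returns [changed, xml]: changed is false when the file is already correct,
# which is the property Chef needs to avoid spurious resource updates.
def ensure_jira_context(xml_text)
  doc = REXML::Document.new(xml_text)
  host = doc.elements['Server/Service/Engine/Host']
  existing = host.elements.to_a('Context').find { |c| c.attributes['path'] == '/jira' }
  return [false, xml_text] if existing # guard: nothing to do

  host.add_element('Context', 'path' => '/jira', 'docBase' => 'jira')
  out = +''
  doc.write(out)
  [true, out]
end

server_xml = <<~XML
  <Server><Service><Engine><Host name="localhost"></Host></Engine></Service></Server>
XML

changed, updated = ensure_jira_context(server_xml)
puts changed                             # => true on the first run
puts ensure_jira_context(updated).first  # => false on the second run
```

Wrapped in a Chef `ruby_block` or custom resource, the `changed` flag would decide whether to rewrite the file and notify a Tomcat restart.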

Also of note: this is the first time we’ve developed cookbooks using Riot Games’ Berkshelf, which we highly recommend for managing cookbook dependencies and testing workflows. Berkshelf 1.0.0 has just been released, and it’s awesome, certainly more mature than a point-zero release would suggest. We’re still learning a lot about Chef workflow, Git organization, and testing methodologies, and we’d love to read your comments on these topics.

Happy cookbook hacking!

- Julian


At SecondMarket, we run our own DNS servers in Amazon EC2 rather than using Route 53 for the following reasons:

  • We put all our servers in an internal top-level domain, and we don’t want to publish that fake TLD over the Internet.
  • BIND is easier to automate using Chef. In particular, we wanted to periodically iterate over all the nodes registered in Chef, create DNS zone files out of those nodes and reload BIND, thereby keeping our zone files in sync with what servers are actually deployed. It’s easier to do this by creating zone files rather than posting changes to an API. (Once our BIND cookbook is in a more mature state, we’ll publish it to our GitHub.)
  • We also wanted to avoid vendor lock-in. Route 53 is great for the small number of public DNS records we have, but in general we tend to stay away from proprietary Amazon solutions that tie us heavily to their platform. For example, we run our own ActiveMQ broker network rather than using Amazon Simple Queue Service (SQS), which charges per request and GB transferred. This keeps our costs predictable and gives us finer-grained control over the queues using an interface that engineers can understand.
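
The zone-file generation described above can be sketched roughly as follows. This is a simplified, self-contained illustration: in the real recipe the node list would come from a Chef search and the records would feed a template resource, and the node attributes and hostnames shown here are made up for the example.

```ruby
# Sketch: build BIND zone-file records from a list of Chef-registered nodes.
# Each node hash is a stand-in for what a Chef search would return.
def zone_records(nodes, domain)
  nodes.map do |node|
    # CNAME each short hostname to its canonical EC2 public hostname, so
    # split-horizon resolution inside/outside AWS works transparently.
    short = node[:name].sub(".#{domain}", '')
    "#{short}  IN  CNAME  #{node[:ec2_hostname]}."
  end
end

nodes = [
  { name: 'web1.example.internal', ec2_hostname: 'ec2-10-0-0-1.compute-1.amazonaws.com' },
  { name: 'db1.example.internal',  ec2_hostname: 'ec2-10-0-0-2.compute-1.amazonaws.com' },
]

puts zone_records(nodes, 'example.internal')
```

Regenerating the records on each Chef run, then reloading BIND only when the rendered file changes, is what keeps the zone in sync with the servers actually deployed.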

From the beginning, we had some requirements for how to configure the zone files. Specifically, for EC2 hosts, we wanted the hostname records to only be CNAMEs to the canonical Amazon hostname, ec2-xy-za-bc-de.compute-1.amazonaws.com. As EC2 admins know, that canonical hostname resolves to the internal (10.x.x.x) IP of the box when queried within Amazon’s network, and the public IP of the box when queried externally, for example, from our New York office. By putting only the canonical Amazon hostname in our DNS zones, we ensure this can happen transparently.

There are a couple of configuration parameters we needed to set within BIND to make this work.



SecondMarket prides itself on having a great intern program, but we are probably biased. So, what is it like to be an engineering intern at SecondMarket? We think the best way to find out is to ask the interns themselves:

Rotem David (Columbia)

My experience at SecondMarket was awesome. Everyone was very welcoming and took me seriously even though I was just an intern. On my first day, when Michael Lysaght introduced me to the tech team, I noticed the vibrant atmosphere and immediately felt eager to contribute. At first, to ease my landing, I was assigned a project that would challenge me but didn’t require much work on the main codebase. More precisely, I had to create an open-source Maven plugin to document the SecondMarket services API. I learned a lot about reflection, Spring MVC, JavaScript, and CSS. This was a great learning experience and prepared me for the next step of my internship.

After getting used to the workflow, I was given an opportunity to work on the codebase. I joined the Markets-manager team and received my second project. My assignment was to create a cap-table import feature for holders and buyers so that the business side of the company would no longer have to manually input each line of a cap-table. I was surprised by how approachable everyone on my team was. Even though they were constantly on a tight schedule, they always found the time to address any issue I had with the project. This experience mainly taught me how to work in a large engineering team. Moreover, in order to validate the information in the cap-table, I became very familiar with the business side of SecondMarket.

Colleen Carroll (Princeton)

Coming to work at SecondMarket was exciting; I had never worked in a real production environment before, or with so many other people. From the beginning I learned how such a large task is managed. I wasn’t simply assigned tasks to complete; I was a part of product meetings and technology meetings. It was exciting to see how decisions are made in a product that is still growing.

The work environment is great, mostly because of the people. Everyone is friendly and helpful. They definitely make sure that interns have interesting work and are enjoying their time while they’re here. It’s been a great summer.

Sam Stern (U. Penn)

Unlike the other interns at SecondMarket, I never actually applied for my job. I was placed at SecondMarket through a startup internship program called HackNY. When I first got a phone call from Marv Muller, I didn’t know anything about SecondMarket besides what I could find on the company homepage. When I came in for my first day in May, I had no idea what to expect. I was immediately struck by how relaxed the work environment was and how much freedom I had as an intern. My first project was to build a reporting app to generate and schedule data reports. When I got my project assignment, Michael Lysaght asked if I’d like to complete it in Scala or Java. When I responded that I had never written a line of Scala in my life, I was told that I could take the time to learn the language before starting to work. Right away, I loved working at SecondMarket, and it was clear that I would learn a lot this summer. I didn’t have any deadline to meet, and nobody was worried that I wouldn’t get the project done; I was allowed to take my time learning what I needed to know. By the end of my first project I had learned so many things about Scala, Play Framework, MongoDB, and Postgres that I would never have learned if I had been told to just get it done quickly in the language I know best.

About halfway into my internship I had completed two “isolated” projects and was ready to begin working on the main codebase. I was excited to make the move; working in a production environment was my main motivation for seeking a CS internship this summer. When I first opened up the main project I would be working on, I was floored by the thousands of lines of code I would need to read and understand. I spent most of the next week pestering other engineers with questions, and I found that everyone was willing to take time out of their own work to help me get up to speed. Over the next four weeks I learned a ton about what it takes to develop an application of this size and how 30+ engineers can work together on a codebase. These are things I would never have been able to learn without being in this environment and having very helpful coworkers.

Summary

Although we learned a lot, our summers at SecondMarket were about a lot more than just programming. The company has an awesome culture that made sure we never felt bored or anxious to go home. Every Friday we got to look forward to a delicious “Foodie Friday” catered lunch and maybe some Xbox as people got ready for the weekend. We also took a Tech Team pizza tour of NYC, which was both delicious and hilarious. Just last week we had our annual Summer Party at an awesome venue by the water. The whole office has a genuinely fun atmosphere that made us excited to come to work every morning.


Ah yes, it’s one of my favorite times of year again, and not because summer is almost upon us, but because we have just wrapped up refactor fortnight #3. It’s that time of year when we tell the business to take a proverbial hike, and we in engineering afford ourselves the opportunity to take a close look under the hood of our codebase and clean up some of the technical debt that has accrued over the last few months. As mentioned in a previous post, several times a year we spend two weeks grooming the codebase to fix some of the shortcuts we took, or to change some things we didn’t quite build in the manner that, with hindsight, we would have.

In previous refactor fortnights we took on some rather large tasks, and we did tackle some of those again this time. But we also encouraged every engineer to add to the backlog the things that were annoying them personally in the codebase: things they would love to fix but, due to business pressures, hadn’t been able to get around to.

So what kind of things did we get around to fixing? Well a short summary would look something like this:

  • Refactored order model
  • Improved search queries
  • Restructured application contexts
  • Standardized Ajax forms
  • Centralized Google Analytics integration
  • Standardized our view templates
  • Standardized our email templates
  • Centralized common static content across products
  • Improved testing across the platform
  • Streamlined our CSS across products
  • Removed our dependency on external repositories
  • Standardized formatters
  • Updated our coding standards
  • Upgraded some external libraries
  • Refactored inconsistencies in our database types

Given the above list of changes, it is fair to say that we touched a huge amount of code across our codebase. Pretty scary to say the least, but thanks to the huge effort we have made in building out our automated testing over the past few months (which provided a great regression suite), and the fact that we migrated to Git, all changes were made in a meticulously controlled fashion. The ultimate measure of this is that no critical defects have been found in production since we deployed the changes five days ago. All credit goes to the awesome engineers and QA team here at SecondMarket, who really did a great job on this refactor fortnight.

- Michael.


A couple of months ago I asked an engineer to create a branch for a particular feature and merge it back to the main development branch when it was ready. I could see the disdain on his face as he looked at me as if I had asked him to do something obnoxious. And I had. SVN was our source control system; it was a legacy tool, and as it had never presented a problem big enough to warrant changing it, we just continued using it. However, modern agile engineering practices require us to quickly create new branches, switch between branches, and merge branches, and this is where SVN is incredibly cumbersome, so much so that we decided it was time to change.

After a quick evaluation of Mercurial and Git, it became apparent that both tools met our needs, with very little between them. Git seemed a little more complex up front, but it appeared to have more power and flexibility readily available, and as some of the engineers were fans, we decided to migrate all our projects to Git. The migration process itself was pretty straightforward. We staggered the migration over a month, moving one team at a time. We used a tool called svn2git, which allowed us to move all the source code whilst maintaining the code’s full commit history. After completing some dry runs, for each project migration we asked everyone to commit their code changes on a convenient evening, ran the migration, and when they got back the next day, they started using Git.

As Git works in a fundamentally different manner from SVN, the team found it took a couple of days to get used to. Git at its core is a distributed version control system, so the first thing engineers needed to get used to was the concept of a local repository that is a clone of the centralized repository. The power of this is that you can develop and commit code anywhere, including when you are offline, and synchronize with the centralized repository later. Because you have a local clone, you can even retrieve a previous version of a file while offline. Also, as everyone has a local repository, you can easily share changes with an individual engineer rather than having to commit to the central repository, which is very convenient when more than one engineer is working on a task.

But the real power came in Git’s ability to branch and merge. Historically, branching with SVN was skittish. In Git, switching between branches is a simple command (git checkout branch_name), and the most elegant part is that the IDE handles the switch immediately. The days of painfully configuring each branch within an IDE were over. Merging a branch back is also much simpler. SVN keeps deltas of each change between commits, and during a merge a lot (or an awful lot) of false conflicts are flagged. Git, on the other hand, keeps a snapshot of the entire file for each check-in and manages to identify real conflicts far better. Our first major branch merge resulted in no conflicts to resolve and took a minute; with SVN it could have taken hours and would probably have introduced some errors.

As we continued using Git, we did run into some issues. On a couple of occasions we were just about to push features into staging for final validation prior to release when we realized that a number of check-ins were missing. Pretty uncomfortable, to say the least. On another occasion, code that was not meant to be released was found on staging. This was even more uncomfortable. Both these issues were caused by people forcing changes into master without fully understanding what the commands they were using actually did. I saw these as basic teething problems with people becoming familiar with a new source control system rather than failings in Git. To reduce the risk of these issues happening again, we established a set of guidelines for all the engineers, which includes details of how best to branch, how best to share code, how we merge, and so on. We also decided to lock down access to the master branch using gitolite, allowing only SysOps and tech leads the ability to merge to master. This restriction is a short-term fix while we grow more comfortable with Git.

Here are some of the practices that we now follow:
  • When you start a new feature, create a local feature branch off of our dev branch
  • When a feature is complete, rebase+squash, then rebase from dev and merge into dev. (squashing keeps the history pertinent)
  • If you need to backup your source or share an incomplete feature, create a copy of your feature branch in origin (or any remote repo)
  • Ideally you should not rebase the shared branch until you are finished with the feature branch
  • If you need to rebase the shared repo, rebase+squash and create a branch off of the shared branch

In summary, whilst there is a learning curve, the transition has proved invaluable. Our plan is to release code even more often, and when we do, for that code to be of a high level of quality. Git is helping us achieve that goal.

- Michael


SecondMarket is going on a road trip. Yes, we are heading to Austin, TX for SxSW, where we will also be hosting SecondMarket House this weekend (March 10th & 11th). At SecondMarket House, as well as throwing some obligatory parties, we will be hosting a number of talks about our business and our technology.

Of interest to this audience will be the technical talks. The first, on Saturday, is about how to get started with mongoDB, presented by Nosh Petigara of 10gen and myself. At SecondMarket, we have been working with mongoDB for over a year and use it for different parts of our system. Along the way we made some mistakes, which I will share so you can avoid them.

The second technical talk, on Sunday, is ‘Should your startup consider Scala?’. We have been using Scala for just under a year, and our engineers are having a lot of fun with it. So much fun, in fact, that we brought Bill Venners (co-author of Programming in Scala) and Dick Wall into our offices for a rare and amazing training experience (by the way, they are running a public course here in NY that I strongly recommend). In this talk, I will present what is awesome about Scala, some of its complexities, and a judgement on whether or not it is the right technology for your startup.

Hope to see you there.

- Michael.


At SecondMarket, we have been using mongoDB to store our asset classes and events/notifications for the last year. Mongo was selected for storing assets because our diverse asset structures are completely different, yet in Mongo they can all be stored in the same collection thanks to its schema-free nature. In a relational database, each asset class would need to be represented as a separate table; alternatively, we could store all assets in a single humongous table that we would need to add columns to as we added new assets. Yuck! Mongo also made sense for storing our events and notifications, due to Mongo’s ability to handle large volumes of data and the ease with which this data can be sharded.

For anyone who has used a traditional relational database and come to rely on Liquibase for handling database change across environments, not having that tool available for Mongo was a headache. This is why mongeez was born. Mongeez is a tool that allows you to modify the structure of your documents and replicate those changes, in unison with your code, across all your environments from QA to production. Imagine a scenario where you have 10 QA environments, with each team needing a different version of your code and a different version of your Mongo collections in each environment. If you wanted to change the structure of a Mongo document, when would be the appropriate time to roll that change out to your QA environments? Don’t worry, you’re not expected to know the answer :) What you want is that, as you deploy the code that works with the new structure, the underlying Mongo structure is automatically updated at the same time. This is what mongeez enables.

Using mongeez is simple. You create a set of Mongo JavaScript changesets that do things like modify your collection structure or insert data (see below). As you write the scripts that modify your structure, you also modify the code that manipulates the underlying data.

<mongoChangeLog>
    <changeSet changeId="ChangeSet-1" author="mlysaght">
        <script>
            db.organization.insert({
              "Name" : "10Gen", "Location" : "NYC", "DateFounded" : {"Year": 2008, "Month": 1, "Day": 1}});
            db.organization.insert({
              "Name" : "SecondMarket", "Location" : "NYC", "DateFounded" : {"Year": 2004, "Month": 5, "Day": 4}});
        </script>
    </changeSet>
    <changeSet changeId="ChangeSet-2" author="mlysaght">
        <script>
            db.user.insert({ "Name" : "Michael Lysaght"});
        </script>
        <script>
            db.user.insert({ "Name" : "Oleksii Iepishkin"});
        </script>
    </changeSet>
</mongoChangeLog>

These changesets are then committed to your source repository of choice. Mongeez includes a utility that, when run on application startup, checks which changeSets have not yet been run in that environment and runs them. This means that whatever version of your software is running in any environment, you will always have the matching code and mongoDB collections/data. We find this to be pretty neat!
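
The startup check works along these lines. The sketch below is a plain-Ruby illustration of the idea only (mongeez itself is a Java library, and its actual bookkeeping differs): a record of applied changeSet ids is kept in the database, and on each startup only the scripts whose ids are not yet recorded get executed.

```ruby
# Illustrative sketch of changelog tracking: run each changeset exactly once.
# `applied` stands in for the collection where executed ids are recorded.
def run_pending(changesets, applied)
  changesets.each do |cs|
    next if applied.include?(cs[:id]) # already run in this environment
    cs[:script].call                  # stand-in for executing the mongo script
    applied << cs[:id]                # record it so it never runs again
  end
  applied
end

log = []
changesets = [
  { id: 'ChangeSet-1', script: -> { log << 'insert organizations' } },
  { id: 'ChangeSet-2', script: -> { log << 'insert users' } },
]

applied = run_pending(changesets, [])
run_pending(changesets, applied) # second startup: nothing new to run
puts log.length # => 2, each changeset ran exactly once
```

Because the applied ids live alongside the data, each environment converges on exactly the changesets its deployed code version expects, which is the property described above.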

Where can you find mongeez and get some code samples? Check out the wiki at mongeez.org and the examples on GitHub. Also check out the presentation I gave at last month’s mongoDB NYC meetup.

-Michael


We are excited to announce the launch of the SecondMarket Engineering blog. At SecondMarket we regularly encounter and solve technical challenges that we feel are worth sharing with the community at large, hence the launch of this blog.

Over the coming posts we will provide insights into the technologies we use, how we use them, our processes, methodologies and our culture. Our objective is to be totally open and honest, so whether we get things right or wrong we can all learn. 

Join us on this journey, I promise, it won’t be dull:)

- Michael