At some point in your history as a designer or developer, you've thought of a cool idea. The idea would be awesome if you could only get it out there, but you never end up doing it, or you do it and it fails. What can you do, or could you have done, to make sure your startup is successful?

Well, I've been down this path a few times, and I'm going to share my most recent story in the hopes of helping someone avoid mistakes I've made or witnessed in my startup attempts. As a point of reference, I'm going to use my most recent startup involvement: Ngame.tv. Presently, I'm no longer with Ngame, but my learning experience was vast!

Ngame.tv

In December 2009, Shane Latham approached me about doing some video work. I pro-bono'd some time and provided a video player for the Tekken 6 NA Championship. It was pretty cool. This was my first direct taste of the gaming broadcast world, and it was far from what we would attain in due time.

Soon thereafter, he talked to me more about UFrag TV and asked if I wanted to join him to rebuild the site. It was an interesting opportunity, but I knew it would be a gamble since there were already internal issues. Then again, all companies have issues, and people can be replaced, as later happened. So I weighed the options and figured…hey…I had a shot to do something I'd been wanting to do again, on a different scale, for a really long time: a live streaming platform. In the early 2000s, before YouTube, I tasted online video and felt I could do it better. This was a chance to do just that, though it was more focused on live streaming than VOD.

We went on to do the Tekken Worldwide Championship, which was pretty sweet. I got to see how serious people were about gaming outside of my lil’ crew I always game with. Shane, and crew, were doing a phenomenal job with the commentary and production of everything. I was impressed and started seeing some real potential.

There were a few other events we did while also working on the first revision of the site, which was to launch in early 2010 [March, if I recall correctly]. My goal with the first revision was to be a slimmer version of the previous UFrag site, but scope creep took over.

Initial Launch

What makes startups succeed or fail? More than 90% of startups fail, due primarily to self-destruction rather than competition.
— Steve Blank, in a post on The Huffington Post

This article came out recently, and it hit the nail on the head. As we began planning v1, I pushed hard to enforce a simple version with live streams only and basic registration. The team [Shane and David Paget, the CEO] agreed but felt certain things were required. Honestly, to an extent, they were right [especially regarding a quality chat system], but only a year later would I realize that.

Too Much Too Soon

We blew it on marketing. UFrag had a community clamoring to get back on the site. We were under pressure to deliver, for sure, but that wasn't the huge issue. Our problem was giving them a firm date before having been through an alpha, beta, etc. I knew better but figured, "this is a simple site…we'll get it done." WRONG. Scope creep took over, plus my day job with Katapult Media required my time, and soon the month to launch arrived with us miles from ready. With a community ready to see the new UFrag, which we had talked about numerous times during live events, people were quite disappointed when it didn't launch. We heard it loud and clear, but there was nothing we could do about it…it wasn't ready.

During this time we had an executive change, and for about 2-3 months things were in complete disarray. David Paget was out as CEO. Shane now held that position, rightly so…it was his baby, and a new investor was on board: Vincent Reynaud [my favorite French dude; his accent is AWESOME! lol]. Drama so early on hurt our potential launch and seriously damaged my desire to continue, but I pressed on.

We’re now Ngame.tv

With our new team together, we embarked on a refreshed goal of getting the site done. As a result, we changed the name from UfragTV to Ngame.tv.

We brought on a contractor to help out. There was some movement on the production side of things as well, through the purchase of a Flypack. Things were forging forward. My available time wasn't great since I had to focus on our new employee at Katapult, so my main goal at this time was to get our contractor on the right path for the code and platform.

It wasn't until August or so that things finally smoothed out. During the summer a new business person came on board: Lachlan Wortman. His main goal was to get funding, because the site was now 5 months beyond our target initial launch. After 5 months of ups and downs and business changes [new people, new name], we forged a plan to get launched. The site was getting there, but every time we completed a list of features, more stuff popped up [more scope creep]. A lot of it was worth it in the long run, but every new feature meant X hours of testing and X*2 [or 3] hours of debugging, tweaking, fixing, or reworking if it wasn't done right, which happened a few times.

The time had come to launch, and we were laser focused on getting this done. Our contractor did some solid work but was not without issues. There were delays on both sides [ours and his], and beyond just the tech/code. I went from directing the tech side of the ship and doing a little coding to taking over and knocking out features, bugs, etc. myself. It was time to fully tie this puppy to a CDN, and we chose EdgeCast.

Get’er Done!

Our new direction pushed us forward to tying a CDN into our backend code. With EdgeCast as our choice, we started the integration process. What a nightmare!

Being a provider of products/services myself, I'm verrrrry understanding, now, of companies failing in areas of their offering. I get it. You have to prioritize support, client requests, consulting time, etc. while also improving your overall system quality, features, etc. This isn't easy. Even terrible services have to do these things, and, typically, those services are deemed terrible because they are failing at more than one of these, or similar, areas.

EdgeCast was the CDN suggested by a streaming media professional based on his assessment of our needs and available financial commitment. He was right. They fit the bill financially by providing a really high-quality, worldwide CDN at an astounding price. It looked very promising, and it was.

Then Why Was it a Nightmare?

I used the word nightmare to describe the working relationship, not the service…per se. Our needs were something EdgeCast had not seen:

  • Dynamic live streaming with secured access
  • High volume post-recording video processing [on our servers]
  • Accurate live stream status and viewer counts
  • Post-recording notification

Those four things were out of their scope, in a sense.

Dynamic live streaming with secured access

They provide a mechanism where you can protect your account with stream keys. This was perfect in initial documentation reading and testing, but not usable by any means in our system. We needed a user to sign up, get a stream key, and be able to publish instantly. Their system allowed for stream key setup immediately, but it took 45-60 minutes to provision. That would have been a major fail, and the community would have laughed their way right back to Justin.tv, UStream, or their desired network. We could have provisioned 1,000 keys [random number] and assigned them accordingly, but EdgeCast was unsure of any limits, and deleting these keys was not instantaneous. It also tied us to EdgeCast, so moving away from them would have meant rewriting our authentication mechanism.

This was one piece of the system that took time to work around, but we eventually did. It turned out we had to provide custom Flash Media Server code to call our private API server, authenticate the request, and return a result. This meant building an API and a stream key system. It wasn't trivial. It took a couple of months between delays on both sides, testing, debugging, fixing, adjusting, rinsing, and repeating, but it was ironed out in December of 2010 after I further took matters into my own hands.
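The actual work-around lived in custom Flash Media Server code calling our private API, but the API-side check is simple to sketch. This is a hypothetical Python sketch [the real system was ColdFusion-backed; these names and the in-memory store are my invention], showing the shape of an instant-provisioning stream key system:

```python
import secrets

# Hypothetical in-memory store; the real system would use a database.
STREAM_KEYS = {}

def issue_stream_key(user_id):
    """Generate and store a stream key the instant a user signs up."""
    key = secrets.token_hex(16)
    STREAM_KEYS[key] = user_id
    return key

def authenticate_publish(stream_key):
    """Called from the media server's connect hook before allowing a
    publish. Returns the user_id if the key is valid, or None to reject."""
    return STREAM_KEYS.get(stream_key)
```

The point is that key issuance and validation both hit our own store, so a user could sign up and publish immediately, with no 45-60 minute provisioning delay and no lock-in to the CDN's key system.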

High volume post-recording video processing [on our servers]

We were never able to fully implement this, and it hindered us drastically. EdgeCast could not provide us with a sure-fire way of getting our VOD content to our servers in a timely manner after a recording was complete. Consider their position: they provide VOD and live playback. VOD is where the dough comes in, mainly storage. We wanted to cut them out of that part since it was entirely too expensive, as we found out, but most importantly to innovate on our platform so users could get their desired features.

For the most part, I understood their issue, but a solution was still needed. In essence, there was no way for us to know whether a video was ready to transfer or not. We could have used rsync or FTP, but we could end up with file fragments: once a recording was completed, it was FTP'd [internally at EdgeCast] to our origin server [the main CDN server for our account on EdgeCast], so a file could show up on the origin server before the entire transfer was done, just like with FTP'ing any file. This prompted a polling FTP approach that ultimately failed horribly.

This was far from desirable. Polling means we, in this case, continually check the origin server for new videos. Once one was available, we needed to verify the file was completely transferred, mark it in our system as VOD, then transfer it. The first part of that is the problem. There was no way to know if a file was complete other than checking the last known file size vs. the current file size. This works, but not across multiple terabytes of videos, which we shot past very quickly after launch. :-/

Needless to say, not being able to be notified when a video was available completely prevented us [as a small company with a limited budget; a larger budget might have allowed some other solutions] from providing things like video thumbnails, multiple file formats [i.e., iOS playback], and lower costs for VOD storage.

Accurate live stream status and viewer counts

This wasn't detrimental, but it was a need. Unfortunately, their API was broken, literally, and it took the better part of 45 days for them to provide a fix. This stretched from late October/early November to mid-December 2010, and since the API still wasn't fixed, we couldn't use it. Instead, they provided a text file with a list of live streams. Easy enough. We loaded the text file, parsed it, and updated our database.
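The load-parse-update step was about as simple as it sounds. A minimal sketch in Python [the one-name-per-line format is my assumption; I don't have the original file layout], including the de-duplication we later found we needed:

```python
def parse_live_streams(text):
    """Parse the CDN's live-stream list [format assumed here to be one
    stream name per line], dropping blank lines and duplicates while
    preserving order."""
    seen = set()
    streams = []
    for line in text.splitlines():
        name = line.strip()
        if name and name not in seen:
            seen.add(name)
            streams.append(name)
    return streams
```

The resulting list is what we would diff against the database to mark streams live or offline.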

Once this was in place, you could use the full system we worked so hard at perfecting. There were problems with the text file [duplicate videos, some videos not listed, etc.], and, through support, those were finally fixed. Eventually we removed the text file limitation in favor of a more custom solution I worked up through more custom Flash Media Server code. It was yet another attempt at decoupling us from EdgeCast in favor of a solution we had greater control over.

We were without a live viewer count, though. It wasn't a major fail with the community at launch, but we found it to be an excellent feature once we provided it. This was another area where their API just wasn't ready to provide the relevant information. They eventually improved it, and I quickly integrated it over a weekend, post-launch.

Post-recording notification

This somewhat duplicates the above but was a separate concern. We needed to be notified when a recording completed so we could remove it from our homepage. The text file, mentioned previously, was refreshed every 3 minutes. This meant that at any given time, a dead stream [previously live but now offline] could sit on the homepage for up to 2 minutes and 59 seconds. Users would be confused, and the site just felt outdated.

The custom code I added [mentioned in the previous section] provided us with real-time [within milliseconds, seconds max] notification of live streams going offline. I could watch the logs and see everyone going live or offline. It was pretty cool to see our volunteer community staff stopping and starting streams while the player and homepage adjusted accordingly. Sweet stuff, but it took time to get right. It never reached perfection due to some tech issues where our API would fail on a stop or start request, so a video wouldn't show up or wouldn't be removed. Our admin panel provided a way to easily remove these videos, though.

The EdgeCast issues were fixed or worked around, and the site was pretty much feature complete…sort of. VOD playback could take minutes to kick in: videos were requested from an edge server, and if they didn't exist there, the edge would trickle back to the origin server until it found the file, then transfer it to the edge. This made VOD virtually unusable to many [coupled with other integration issues blocking us from innovating on our platform] and would later make people loathe our choice of EdgeCast and our inability to provide simple things like fast VOD playback. At this point we were pushing into January of 2011, and a launch was imminent.

Beta Launch

I'm a HUGE proponent of beta programs. The longer the beta the better, but more important than length of time is quality. In February 2011, we launched the beta on Shane's birthday. It was an insane day because I was taking my wife out for a Valentine's dinner. Due to poor planning, I literally built a beta invite system within 20 minutes [otherwise the site would have been open to the public], giving me 10 minutes to get it uploaded, debugged, and ready for use before my wife cut my head off. We were already behind schedule, and I was coding in the snazziest clothes I have ever written code in. 🙂 The invite system took about 40-ish minutes after all of the bugs were out, and we were ready for beta.

We got it launched, privately, but it wasn't without issues. Feedback came in; people raved about the great video quality and ranted about things they couldn't stand. The main complaint was our requirement to create a recording in the system before hitting record. This was due to EdgeCast limitations, but, as with the other issues, I found a work-around and fixed it before we launched. Shane was right yet again, as he often was when it came to features.

After a few weeks in beta and some solid work time spent fixing bugs, adding minor to medium features, changing stuff, etc…it was feeling like we should launch. We had a problem though: our beta testers were mainly publishers.

Our streaming approach was worked out, but we hadn't focused on handling the many viewers that would come in and watch these streams. Our fabulous chat system, written and provided by Mark Vaughn, was humming away by now and was an astonishing change from the Gigya chat we were previously using, which was plagued by being a "pre-event" configured chat [set it up for one thing before an event, rather than dynamically creating rooms]. Everything was in place except our load testing.

Load Testing

Prior to beta, I put a feature freeze in place. I didn't hold true to it like I should have. The whole time during the beta I should have been solely focused on cleaning up code and testing load. I wasn't well versed in load testing or Linux server management, so I needed more time to investigate and test. Instead of load testing…I wrote more features. 🙁 Bad move. Really, really bad move!!

Since we were so far behind our self-imposed timeline, money was running low. The production side [and a few other things] ate up our budget, so our focus changed from quality of service to getting funded. MAJOR MISTAKE! Quality over quantity/speed should always be the focus, but we lost that towards the end.

With a low tech budget [less than 1/5th of the company budget], I had to find alternate ways to do things. No worries…I learned to personally live on a budget years ago, but man, was it tough cutting corners. Ultimately, we blew it, IMHO, by not focusing our financial arm solely on the site [more on that later], but hindsight has perfect vision. Either way…it was time we put this puppy into the wild and let it roam!

The Launch

After well over a year of time invested, tons of business drama, mounds of tech issues, and a shrinking budget…it was time to launch. I got caught up in the hype of the launch, and having failed a year earlier [March 2010] to launch, I couldn't dare pull the rug out from under the community again. We couldn't afford to wait any longer financially, either. That's a recipe for disaster, and that was our launch gift to ourselves.

I hadn't performed any serious load testing on the site, due to features piling up, until days before the launch, literally a week or so. This made me nervous. I mentioned it to the crew as a reason not to launch but didn't put emphasis on it because I truly thought we would be fine. It was crunch time, and we were going to get this puppy out there by any means necessary.

The Day is Here

Wow…this highly anticipated, live streaming gamer platform is going live. I was pretty amped. Shane chose a cool date/start time: 3/11/11 @ 11:11 AM PST. I’m CST so that was 2:11, which isn’t nearly as cool, but EST was cool: 3/11/11 @ 3:11 PM EST. 🙂

I was nervous because I had seen crazy wild numbers when running the top command in Linux. It showed RAM spiking during simple load tests just days before. Questions I asked in the dev community calmed me, as it could have been a number of things. Turns out…I should have taken heed. Instead, I told Shane about it but said it should be fine, since the CPU at 300% [3 processors all at 100%] didn't affect the site loading and there were other possible causes. I addressed some of them, and we got ready to launch.

Pushing the Button

Around 1:45 to 2 PM, the whole crew jumped on Skype. It was like a big party. Virtual high fives were thrown around with numerous smiles, jokes, and tons of laughter. It felt great. I was hustling to make sure all DNS, etc. was ready to roll for the switch.

At 2:11:00 PM CST…I pushed the button.

OMG

At 2:11:01 PM CST…we crashed.

My plan was to push the button, hang out for a bit, and take the day off to rest. It wound up being: push the button and work for about 40-44 of the next 48 hours. It was a very dark and gloomy weekend, emotionally and physically. The site was up and down for the better part of the weekend.

Over 10,000 people hit the site the very instant we went live, each of them constantly refreshing to get access. Those with access clicked around like hungry mice in your pantry looking for grub. Those 10,000 people created an unexpected number of page requests [I don't recall the total off the top of my head]. I spent hours trying to figure out what went wrong, tapping into my dev friends, mailing lists, etc., but I couldn't.

So, on Saturday…I rewrote the site completely. Sunday, I finished it. I say completely, but not really: my models [database interaction code] stayed mostly the same, but the views [resulting output] and my controllers [what handled the page request, pulled the data, and passed it to the view] were mostly gutted. This helped, but not for the reasons I thought. The community was informed of a "memory leak," but that wasn't technically true. The story spread quickly, so I never tried to retract it, although I did make sure the team/staff did not provide technical answers without checking with me first, so no one spread misinformation.

We also threw more RAM at the MediaTemple server. This helped as well, since we were on a lower-RAM (dv) system [remember…cutting corners to save dough]. I also set up another MT server to be a database-only server. This helped too, but we still were not out of the dark.

The site then went into a semi-up state for the next few weeks while I neglected my Katapult duties and family to fix Ngame. This wasn't a fun time. I was horribly stressed and pressured, and my family was suffering because of it [seeing as 12 hours of work meant a break before the other 4 or 8]. Every day I found myself triaging issues, restarting Tomcat or Apache, and/or fixing bugs while also attempting to innovate on the platform.

My faith in ColdFusion, our chosen backend, was waning. In my, at that time, 10 years of CF dev there had not been one case of such horrific performance. I had done some heavy data/reporting work for Sprint in CF 7 [maybe 6], and it performed beautifully. We were on Railo, an open-source alternative to Adobe ColdFusion, so of course I thought maybe that was the issue. It wasn't, as I found out later.

I finally got a handle on all of these issues by putting some internal caching in place with memcached [kudos to Railo for such a simple implementation model], moving the site to Amazon AWS, and putting a few small servers behind a load balancer. The servers were also set up by Railo professionals after they reviewed our old MediaTemple setup and found areas of improvement.
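The caching in question was a straightforward cache-aside pattern. Here is a rough Python sketch of the idea [the real implementation used Railo's memcached support, not this code; the dict-backed cache merely stands in for memcached, and the names are mine]:

```python
import time

class SimpleCache:
    """Dict-backed stand-in for memcached, supporting get/set with TTL."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.time() > expires:
            del self._store[key]
            return None
        return value

    def set(self, key, value, ttl=60):
        self._store[key] = (value, time.time() + ttl)

def get_homepage_streams(cache, load_from_db):
    """Cache-aside: serve from cache; on a miss, hit the database once
    and cache the result so page loads stop hammering the DB."""
    streams = cache.get("homepage_streams")
    if streams is None:
        streams = load_from_db()
        cache.set("homepage_streams", streams, ttl=30)
    return streams
```

With this in place, every page load no longer touches the database; only a cache miss does, and the short TTL keeps the homepage reasonably fresh.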

To the Cloud!!

The idea was to have multiple servers powering the site. This helped tremendously, I thought. False positives are so fun, aren’t they?

By now Ngame was a joke to many but a home to more. We had a loyal following building up, and people were truly enjoying the site and praising our video quality over our competitors'. Twitter jokes and negative references calmed down, and all searches for "ngame" were of people streaming. Numerous gamers left Justin.tv to come over for a variety of reasons [mostly due to their crazy policies of banning people, as they told us]. Our numbers were up, and, though not the same as launch weekend, everything looked very promising. Even Justin.tv started meddling with our users, trying to find out why they liked our site, banning people for mentioning Ngame in chat, etc. Not long after, they would launch Twitch.tv. They probably won't say we had anything to do with it, but I know for sure we shook up their world by making them take notice of a potentially large competitor in gaming. That felt good…really good.

Eventually it started sailing smoothly across the Interwebs. This was exciting. My goal was to focus on tightening down the current code and improving what we had. I'm a huge proponent of perfecting what you have and then incrementally adding new/cool stuff, sort of an iterative approach. The team/staff didn't feel this way, for the most part, and rightfully so in some arguments. It was a numbers game. The goal was to get as many users as possible, so we had to have features x, y, and z to compete with the few big, established competitors we had. This is a peeve of mine [targeting feature for feature just to win users]. I caved and went to town on new features and bug fixes, launched them a month or so later, and BAM! ISSUES! Ugh!

AGAIN?!?!?!?!?!?!?

I got sick of hearing "site's down." I mean…we were on new servers, the database was on Amazon RDS set to multiple availability zones [automatic failover for the database], multiple servers sat behind a load balancer, and we had some [very] minor caching. Ahhhhhhhhhhhhhhh!!!!!!!

This interrupted my rodeo day with my Dad, who is a cowboy to his bones. I talked Shane through resetting the Amazon servers and went back to my family. I refused to pull the family away from making memories. Later that day/weekend, I looked into setting up a better way to scale the servers.

With auto-scaling in place, this meant we could go from 2 small instances to 14 without manual intervention. Great, right? WRONG!

You see…the cloud is not a magic bullet for scaling problems. As Sean Corfield said in a private conversation one day: "Most people do not have scale issues." What he meant was that most people are not really operating at real scale. 10M requests/month is not major scale, so our little 50k/month was far from a scaling problem, per se.

If That Didn't Work, What Really Was the Problem?

If you have 14+ servers, several hundred [no more than a thousand] allowed database connections per server, and thousands of requests per second…your database is going to hiccup if not properly optimized. Ngame's gagged.

We explored database issues while on Media Temple but I missed a major need: indexes.

Have you ever tried to find a book in a library without referencing some system of order [an index]? You'll find your book, but I'm sure it will take quite a while. A database works the same way. A query asks the database for some data, the db looks for it, and returns it. The looking is where Ngame's database performed horribly.

You see…I'm not a database administrator [DBA], and I never claimed to be. I can create databases, tables, columns, views, stored procedures, queries, etc. all day. What I cannot do is optimize any of them to anything close to perfection. This is why I stick to what I do best [backend/front-end coding], but in this case, since I could get it done without costing the executives any equity or the company any more money, it fell on my shoulders.

For years I worked with who I regard as one of the best-kept secrets in the DBA and .NET world: Cody Beckner. I tried my best to get him on the team, and he was willing to work with us, but there were major hurdles in getting me a team to work with [mainly logistics and agreeing on percentages]. Not having a development team was, by far, the biggest failure Ngame brought on itself, but I digress.

Cody and I have worked on major projects in CF, .NET, and otherwise. I focused on coding the backend/front-end, and he handled the database/backend [when in .NET]. This is a beautiful relationship to work within because the dude is a rockstar developer by every measure, and I NEVER worried about optimizing a database. He taught me the little I did learn about databases and made me just good enough to be dangerous.

Over all of the time Ngame spent with issues, basically March until even today, the causes were mostly database problems. Thousands of error emails were sent from the site each week, and, if I were to take an uneducated guess, I'd say 90% were database related [70-80% if I'm being conservative]. Remember all of those connections I mentioned earlier? Yep…those were the majority of the emails [connection timeouts due to slow queries]. There were some legit errors too; I'd mostly knock those out pretty quickly, though. Software has bugs…it's the name of the game. No software is perfect; otherwise you would never have dot releases.

With a faulty database setup [not the schema, but no indexes], the site suffered. Consider a funnel. If you put in a lot of marbles [people], the funnel backs up [pages slow down]. If you throw way too many marbles in the funnel, some fall out [the dreaded 400/500 errors]. Slow page requests work the same way. Too many slow page requests and your site goes into an "over capacity" state. Maybe Ngame should have used a whale like Twitter for our capacity failure moments. 🙂

All of this drama caused a lack of faith in the site being viable and the blame was placed on ColdFusion, mostly, by the staff and soon the community.

Defending ColdFusion

In May 2011, I lost the confidence of the staff in the tech. They started doubting ColdFusion, as I once did, due to the constant performance issues, and a few outright called me out for saying the major issues were database related. I actually planned to rewrite the entire thing in PHP+MySQL to lower costs and get out from under the JVM, but other things [see below] halted my involvement.

It wasn't until a chance day in June, when I was notified of the Gamers page being down, that I was able to provide them proof of the database being the issue. By now I had read up on indexes, added some to the database [and saw pretty solid performance gains], and even had Cody help me on a few indexes and optimization points. So when this issue came up, I popped into Terminal [on OS X], dialed into my local copy of the Ngame db, and ran the query. It took 34 seconds. Yes, seconds, as in half a minute for one query.

This was ludicrous! Thanks to my High Performance MySQL book, I was armed with the tools [knowledge and the command line] to see what was going on. After about 10-15 minutes of testing, I fixed the issue, and the query went from 34 seconds to 150 milliseconds, simply by adding the proper indexes to the database. I'm not kidding. Without a single code change, I added the indexes to the live database, clicked the dead link [the Gamers page], and voila. The page loaded super-fast.
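You can watch the same effect in miniature with SQLite [a stand-in here; Ngame ran MySQL, where `EXPLAIN` shows the equivalent shift]. The query planner goes from scanning every row to seeking straight into an index, with zero application code changes:

```python
import sqlite3

# Build a toy table of videos keyed by gamer, with no index on gamer_id.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE videos (id INTEGER PRIMARY KEY, gamer_id INTEGER, created_at TEXT)")
conn.executemany("INSERT INTO videos (gamer_id, created_at) VALUES (?, ?)",
                 [(i % 500, "2011-06-01") for i in range(10000)])

query = "SELECT id FROM videos WHERE gamer_id = ?"

# Without an index, the planner has to scan the whole table.
plan = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(plan[0][-1])  # a SCAN of the videos table (wording varies by SQLite version)

# Adding the index turns the scan into an index search -- no code change needed.
conn.execute("CREATE INDEX idx_videos_gamer ON videos (gamer_id)")
plan = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(plan[0][-1])  # a SEARCH using idx_videos_gamer
```

On a table with millions of rows, that scan-to-search change is exactly the kind of jump that takes a query from tens of seconds to milliseconds.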

Of course, I instantly went to the staff chat [on Skype] to exclaim this victory over the database…and to defend CF. A bit of "told ya so" without trying to be petty. It settled my battles with the staff over whether CF was a poor choice for the platform.

Turns out, CF was humming along without issue. The database was grinding to a halt almost daily during peak hours.

The Cloud Isn’t a Silver Bullet

Bad code in the cloud or elsewhere is still bad code, whether it is in the database, backend or front-end. Do not think going to the cloud is some magical, silver bullet. It isn’t. It simply means: virtual servers capable of scaling faster than dedicated hardware.

Last Stand

So we had seen ups, downs, and OK times. Fortunately, through all of this we had a really solid community. I mean, these people were top-notch fun. I thoroughly enjoyed getting to know many of them, as I liked to frequent rooms and just see what was going on. My favorite by far was the insane guitar work by Joe33345. Dude made it look REALLY easy on the hard levels. Insane! That's not to knock some great shows like The Nerd Junkies, RCWF, Mistress Keo, Lirky, or my newest gamer friend [one of the many great things I got out of Ngame] Goku's Angeal show. I'm drawn to music, so Joe33345's show was pretty cool.

This prompted the attention of some folks for funding. We turned down a couple of offers. They didn't fit the bill, but we would soon return to one of them.

Moving On

Tensions were high, and mid-May prompted my unexpected exodus from Ngame. I took 2 weeks off from Ngame work just to get away, but most importantly due to a ridiculous dispute in an executive meeting. The business wasn't operating at peak performance, words were said, and I chose to step back until some demands [namely, to fix the business operating style; not any personal demands] were met.

WOW…life! I had forgotten what it was like to be free at night and to work on client projects without discussing an outage, feature, or bug. After 4 days I knew I needed to leave Ngame. It was somewhat sad, but I knew it was best. My family knew me again, and my clients were getting satisfied at a better rate than before. Don't forget, Katapult still fed me, as Ngame was equity only [meaning no money directly to me unless we sold or got profitable].

After two weeks, the staff wondered if I'd ever return. Shortly thereafter, Shane orchestrated a deal with FilmOn.com to purchase Ngame. In June/July-ish, the deal was closed. They have their own tech crew, CTO, etc., and were planning to rewrite the site anyway to put it on their Battlecam.com platform, so I wasn't needed. It actually worked out perfectly because after a month, two weeks more than initially stated, my demands still weren't met, and I was only around to help where others couldn't. It was a mutual departure, although I had a new desire to rewrite Ngame from scratch, having learned so much the first time around. That window had closed.

Since I had equity, my percentage still held true so we signed the deal and life moved on. Soon you will see the revamped Ngame site and gone will be the good ol’ startup I helped build.

Lessons Learned

What did I learn here? Mostly to stand my ground and not compromise, but there were other lessons as well:

  • From the jump I required a contract but let its timing slip.
    Never do this!
  • I let the simple v1 plan slide and missed a deadline.
    Either do not state a deadline, or stick to your original v1 plans.
  • Then I allowed equity percentages to be set up in a way that made execs unwilling to reduce their stake to add equity-only team members.
    Don’t hold so fast to N% of nothing. Be willing to give up a little to make that much more.
  • I let go of all business-related tasks [brushing most off as “I disagree, but do what you wish”] since I was focused solely on being CTO, only to find out no funding was being attained and we blew through our initial investment on production equipment that didn’t come close to earning back its price.
    If you have business experience, don’t just be a tech. Speak up if you have voting rights; otherwise you might waste a year or more of your life in a dead-end business.
  • I let my need for a team slide and championed it all by myself [aside from some initial help from a contractor], only to fail over and over, with wins in between.

I highly suggest that any developer or designer who is offered equity in a blank startup [one without a product or launch to its name] make sure the following is in place [these are general to any startup, not specifically Ngame, as is the previous list/commentary]:

  1. Legal counsel
  2. Accounting
  3. Documentation/paperwork outlining your agreement and everyone involved
    Do not work without a contract, whether it is contracting or equity.
  4. A business setup [LLC, S corp, etc.]
  5. A clear direction
    Companies should focus on one thing, get it right, then move on. Large companies like Apple get this: release the iPod, then the iPhone, and finally the iPad. They could have done all three at once, but by focusing on one at a time [for multiple years, mind you] they could get the max potential from each product and maximize mindshare in each category.
  6. A team
    I don’t care if you know databases [and how to optimize them], can design, can build the back end and front end, and can manage servers. Get a team. Do not be afraid of giving up a little equity to make sure your product is successful. Would you rather have 90% of nothing or 60% of something big?
  7. Good leadership
    This is priceless.

What Was Ngame Missing

There isn’t one single thing I could say was missing on the business side. Many things, as you can see from above, were going on, and the boat rocked quite a bit for a pre-product company. If I were to sum it all up in a word, I’d say we lacked direction. The company tried to serve two masters [divisions] way too early: production and web.

On the tech side, I can absolutely say what was missing: a team. From the jump I stated I needed a team. Shane, for the most part, agreed with me, but getting everyone to agree on proper equity percentages, and on whether the positions I needed were necessary, was like pulling teeth. I had people lined up numerous times to come on board, help push this project forward, and take my responsibility level from “do it all” to “focus where you excel and direct where you don’t.”

What was Gained

Aside from the above lessons, I learned a ton about performance testing database queries, a dangerous amount about load testing servers, and I regained the drive to do it again…and I am [very soon].

Friends. I mentioned it already but it is worth mentioning again. Knowing Shane, Vincent, Lachlan, Mike [Goku], Mark Vaughn, “Lainy Lain”, Nox, Surge [v1 and v2; same person, lol], EpicNiki, Jordan, Mistical, bkmelendez, HaVoc, Foom, Nerd Junkies, Joe “The Drummer Man” [my nickname for him], Nickbuc, Simon, Socal, Xenos360, John Machete, and many more makes the whole experience worth it. I don’t burn bridges and I remember those who worked hard for and with me, directly and indirectly. I’m indebted to each of them in some way.

Ngame did a lot right, and Shane is a great dude who, without a doubt, knows gaming. He can’t beat me at Marvel vs Capcom 3, but don’t hold that against him. 😉 It was an amazing education and I’m grateful for the times we had, good and bad. I’m better for them both.

What’s Next for Ngame

The v2 site is currently being developed. As of this post, my work is still on display, but it is soon to be replaced. Ngame will live on under the direction of FilmOn and Shane. I’m still very interested in where it goes, as I see it as my baby and want it to succeed.

What’s Next for Me

Code. It is what I do and I’m going to keep rockin’ it. These lessons were invaluable, more than any college degree could have taught me, and I will apply my newfound knowledge accordingly.

As for the next startup….hold tight. 😀 It is pretty sweet and coming in 2012.

Sound off in the comments. I’ll answer almost any question. There are some things that aren’t necessary to discuss, but feel free to ask anyway. If it isn’t within my realm to answer, I’ll politely let you know. 😀