
The key to startup culture: 'Work hard, play hard'

After visiting startups for the past few months to gather footage for my Startup Spotlight series, a few cultural commonalities stand out to me: a fun, relaxed atmosphere; a willingness to take risks; and passion.

Granted, when I visit and film people, the camera could have something to do with the level of excitement employees display. Still, all four startups faking it in the same way? I don’t buy it.

Besides, as a Millennial, I know that the drive to find a job that you’re passionate about, that allows you to try something new and different and, heck, even save the world, is a real thing. Naïve? Maybe.

But I see that drive in the people — young and old — who work at these startups. That’s why I think startup culture works and why larger companies want to implement it into their own culture.

One of the things startups seem to do well: Find a balance between hard work and fun.

I’ve walked around startup offices and seen employees talking on personal cellphones, lounging on bean bag chairs, riding a scooter, and even playing ping pong and foosball — all during “work hours.” I’ve also seen people bounce around on yoga balls right in front of the CEO. It didn’t faze him.

Sometimes I wonder how they get their work done. My hypothesis is this: You know how sometimes when you’re working it becomes hard to concentrate? For most of us, it would look bad to take a 30-minute break to stare into space, surf Facebook or play games on your phone. Startups, on the other hand, seem to embrace the idea that inspiration and creativity come when they come. You can’t force it. But when lightning strikes, people work their butts off. If they’ve hit a roadblock, they take a break to get the creative juices flowing again.

Startups are also unafraid to experiment. They are willing to put everything on the line and fail. Because who knows? The idea or project could just work, and could be revolutionary. But they’re also unafraid to cut their losses when it doesn’t.

Patrick Surry, chief data scientist at the startup Hopper, a search engine that helps people get the best deals on flights, explained it best. For our CIO and IT readers, it’s worth quoting him in full:

“A lot of what we do at Hopper is figure out what the right way to position and deliver the solution to the problem is. It’s challenging — we build stuff, we throw stuff away, and then we build new stuff.

“It requires a certain kind of attitude, I think, among the developers. You have stuff you’ve worked on for three months and then we decide to throw it away and do something different. That can be frustrating for some people. And I think for others that’s part of [the attraction].

“I think a lot of companies get bogged down because you’ve created something that sort of works and you have to continue to maintain it forever. I think as a startup you have the luxury of saying, ‘Hey, that doesn’t work. Let’s try something else,’ both from a kind of business point of view but also from an infrastructure point of view.”

Startups may have more freedom to experiment than established companies, but the attitude is worth modeling at any company hoping to keep up in a rapidly evolving market.

The willingness to take risks and employee passion are the traits that stand out at the startups I’ve visited. Whether those traits result in a viable business, time will tell. In the meantime, those working at startups are excited about what they’re doing. They believe they are working toward changing the world. (And maybe they are.)

And I think that’s what dictates the startup culture. It’s not the bean bags, foosball, ping pong, or freedom to goof around. It’s that employees believe they’re working to make a difference.

Alan Berrey, CEO and founder of Scratch Wireless, a “Wi-Fi First” mobile provider, summed it up during my interview with him: “Look, Scratch Wireless is a blast. I can’t imagine doing anything else. I love it here, I love the people that work here, we’re having a great time together, we make a lot of fun of each other. We take a lot of things very lightly, but we also take the things that are important or serious very seriously. And we really hope to change the model for cell phone services throughout the world.”

Let us know what you think about the story; email Kristen Lee, features writer, or find her on Twitter @Kristen_Lee_34.


After NetFlow metrics are generated and stored in the cache, they are exported based on the active and inactive timeouts. The lowest possible value for exporting active flows is one minute, and inactive conversations are exported every 15 seconds. This means that information about ongoing conversations is exported with a delay of at least one minute.
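A rough sketch of how those two timeouts decide when a flow record leaves the cache — illustrative Python, not Cisco's implementation; the function name is mine, and the timeout values are the ones quoted above:

```python
ACTIVE_TIMEOUT = 60    # seconds: long-running flows are reported at least this often
INACTIVE_TIMEOUT = 15  # seconds: flows idle this long are flushed from the cache

def should_export(flow_start, last_packet, now):
    """Return True if the flow record is due for export."""
    if now - last_packet >= INACTIVE_TIMEOUT:  # conversation went quiet
        return True
    if now - flow_start >= ACTIVE_TIMEOUT:     # still talking, but report anyway
        return True
    return False

# An ongoing flow that started 70 s ago and sent a packet 1 s ago is
# exported via the active timeout, even though it is not idle.
print(should_export(flow_start=930.0, last_packet=999.0, now=1000.0))  # True
```

This is why the delay floor is the active timeout: a conversation that never goes quiet is only reported once per active-timeout interval.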

Not correct: the timeout can be set to 0. This is called the immediate cache type in Flexible NetFlow.
"According to Cisco, NetFlow export at 10,000 flows per second (fps) causes around 7% additional CPU utilization. At 65,000 fps, additional CPU utilization jumps to about 22%."
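Taken at face value, the two quoted data points suggest roughly linear scaling of CPU cost with flow rate. A quick back-of-envelope check (the linear model is my assumption, not Cisco's):

```python
# Quoted figures: ~7% extra CPU at 10,000 fps, ~22% at 65,000 fps.
low_fps, low_cpu = 10_000, 7.0
high_fps, high_cpu = 65_000, 22.0

# Marginal CPU cost per additional 1,000 fps between the two points.
marginal = (high_cpu - low_cpu) / ((high_fps - low_fps) / 1_000)
print(round(marginal, 3))  # 0.273 (% CPU per extra 1,000 fps)
```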

These numbers date back to a Cisco white paper published in 2007. The devices on which the measurements were made were EOLed a long time ago. NetFlow generation technology has progressed since then, and specialized hardware for this purpose is commonplace. As a result, NetFlow generation has no impact on device performance whatsoever.
The immediate cache exports each record as soon as it is created: one packet per flow.
This command may result in a large amount of export data that can overload low-speed links and overwhelm any systems to which you are exporting. We recommend that you configure sampling to reduce the number of packets seen.
One packet per flow?! I will stick to active and inactive timeouts.
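For scale, the sampling that the warning recommends amounts to a simple 1-in-N packet sampler. A minimal sketch in Python — illustrative only, not a router feature, and the function name is mine:

```python
import random

def sample_packets(packets, rate, seed=0):
    """Keep roughly 1 in `rate` packets, chosen uniformly at random."""
    rng = random.Random(seed)  # fixed seed for a reproducible example
    return [p for p in packets if rng.randrange(rate) == 0]

packets = list(range(100_000))
kept = sample_packets(packets, rate=100)
print(len(kept))  # roughly 1,000 of the 100,000 packets survive sampling
```

With 1-in-100 sampling, the immediate cache's worst case of one export packet per data packet drops by about two orders of magnitude, at the cost of missing small conversations entirely.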
This does not look outdated: this blog, too, claims almost the same utilization.
Here's a white paper dated 2005; there is a 2007 version with updated numbers. It's somewhat strange to see 6- or 8-year-old measurements being passed off as current in 2013, especially from individuals claiming supremacy in all things NetFlow.
Did you read the paper before you posted it? The Cisco 2811 is a router still available on the market, and its numbers are near what is stated in the article. Are you claiming that the ASIC of the device was changed in 2013 to give it better flow processing? The article from 2007 still holds until you can show something from 2013 that states the CPU utilization is now hovering near the desired percentages.
The CPU utilization stats are based on a 2005 paper. The paper states that for 65,000 fps, unsampled NetFlow can use up to 22% CPU on some devices. There are also findings of around 16% on a Cisco 2811 at 65,000 fps.

Original 2005 paper

Updated 2007 paper

As far as I know, none of the devices listed in the white paper are EOL yet, which means the numbers still hold. I have also not seen an updated white paper with newer stats from 2012 or 2013. Are you using an argument from silence to claim that device impact has improved and is now nearing 0%? And is that the basis of your sweeping statement that the article is technically inaccurate?

Further, if you take the time to read the article, it clearly states that sFlow can be used for anomaly detection but may not meet expectations: "If your anomaly detection is at the edge where every conversation is critical, sFlow may fail to meet expectations." There are tools and users that use sFlow for security analytics. The post is purely based on real-world cases; they wouldn't use it, nor would a vendor support it, if it were far-fetched, right?
Thanks everyone for the feedback! I admit the CPU utilization stats are based on an old paper, but Cisco has not published an update to the NetFlow performance impact report since 2007. I must say we have seen improved performance on some of the newer switches, like the Catalyst 4500 and 6500 or the 7600 router: only about 5% additional CPU load per 10,000 active flow cache entries. It's definitely improved, but it still demands some CPU attention. However, it is an easy price to pay, considering the completeness of the flows versus packet sampling.