Channel: Hacker News 100

How a Silicon Valley outsider raised $30M in 30 months | Inside Intercom

Comments:"How a Silicon Valley outsider raised $30M in 30 months | Inside Intercom"

URL:http://insideintercom.io/silicon-valley-outsider-raised-30m-30-months/


I’m excited to announce today that we’ve raised a Series B round of $23M.

It was led by Bessemer, investors in iconic tech companies like LinkedIn, Yelp, Skype, Pinterest, Box, Twilio, Shopify, and many more, with participation by our Series A investors The Social+Capital Partnership. Ethan Kurzweil has joined the board alongside Mamoon, Des, and myself.

We founded Intercom 30 months ago. Before then, I was living in Ireland and knew very little about raising venture capital. We’ve now raised a total of $30M over three rounds. These are the lessons I’ve learned.

Aggressively expand your vision

One man’s Facebook is another man’s “online directory for colleges”—which is how Zuckerberg described his new service initially. In the defining days of any new category, there are dozens of people who happen upon the same fundamental idea. But very few have the capacity to see the true potential for it beyond the obvious.

Your vision is the ceiling for your company’s potential. You’ll never be a billion dollar business if you’re not deliberately working to get there. And in venture capital, it doesn’t make sense to invest in anyone who isn’t at least trying to build a business that size.

Intercom is our contribution to Internet innovation. Internet technologies are still catching up with how humans interact offline. The majority of progress in this space is on the consumer side—Facebook, WhatsApp, Snapchat, et al. Intercom is bringing this to business. It’s a seamless, lightweight way for the whole company to connect personally with their customers. The incumbents haven’t innovated in over a decade. In fact, the separate helpdesks, email marketing tools, feedback products, and CRMs have only become more complex. These disconnected services cannot provide a holistic view of the customer. And as a result, the customer’s experience is very disjointed. Our vision is to be at the center of all customer communication for all kinds of Internet business, which increasingly every business is becoming. We’re dedicated to going all the way with this.

Focus on engagement quality before engagement quantity

Startups talk a lot about traction as a measure of their potential. When they do, they roll out the biggest number they have—like the number of people who’ve blinked at their sign-up form. “Meaningful traction” might be something stronger—like an actual sign up. Yet that’s still missing the point. A better measure is engagement. How intensely are individuals or companies using your product?

Venture investors assess deals on the presumption that the vast majority of the market for the product is untapped, and that it’s extremely large. Otherwise it’s not a venture-stage opportunity with high growth potential. And so the number of customers or users you have will always be expected to be small in the grand scheme of things. However, you’ll need to show that those customers are highly engaged. That they’re extracting a lot of value. A small number of such data points—which for some companies could be as low as 10 customers—will help you demonstrate the potential that your product could have in the broader market.

We now have just under 2,000 paying companies—small teams like Circle and Visual.ly, and larger companies like Heroku and HootSuite—and annual revenue in the millions. But more importantly, 65% of our monthly active users log in to Intercom every week, and 35% of our weekly active users log in 5 days per week. And while we’ve a lot to improve, customer satisfaction is off the charts.
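The engagement figures above are simple ratios over active-user cohorts. A minimal sketch of how such rates are computed, using hypothetical counts chosen only to mirror the quoted percentages (these are not Intercom's actual user numbers):

```python
def engagement_rate(engaged: int, active: int) -> float:
    """Share of an active-user cohort that meets a given engagement bar."""
    if active == 0:
        return 0.0
    return engaged / active

# Hypothetical counts that reproduce the rates quoted above.
monthly_active = 10_000
weekly_loggers = 6_500    # monthly actives who log in every week
weekly_active = 6_500
five_day_loggers = 2_275  # weekly actives who log in 5 days per week

print(engagement_rate(weekly_loggers, monthly_active))   # 0.65
print(engagement_rate(five_day_loggers, weekly_active))  # 0.35
```

The point of reporting these ratios rather than raw counts is exactly the one made above: they stay meaningful even when the absolute user base is small.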

Hire out of your league

I’ve heard the idea many times that goes something like: “angel investors bet on people, seed investors bet on product, Series A investors bet on a business…” It’s true that at a higher valuation and raise amount, and with more time spent on the business, there will be increased expectation of progress. But what shouldn’t be drawn from this is any idea that team becomes less important, at any stage. Even public companies’ stock price is affected by investor confidence in the management—study the rise and fall in the price of Yahoo! and Apple since their changes in leadership, respectively.

Mamoon once said to me: “My hiring philosophy is aim high. Get people you think you can’t get. Shock people that you were able to get a certain person.” This is the only approach that results in a step-change in average talent amongst a team. The opposite is a plateau in team quality: getting bigger, but not better. Great people attract great people, and companies that adopt a culture of always stretching to get the very best have been shown time and again to be the most successful. Furthermore, experienced leaders build significantly bigger businesses—“the typical successful founder [has] at least 6-10 years of industry experience”.

We started 2013 with 13 people and finished with 47—all uniquely amazing in ways I’m surprised by daily. In May we announced we had hired ex-Google and Facebook manager Paul Adams, now our VP of Product. And I’m delighted to announce today that we’ve hired ex-PayPal and Yammer exec Mark Woolway as COO. Mark now runs finance, operations, legal, HR, recruiting, and administration, and has had a tremendous impact in only one month. He’s certainly out of my league—he’s worked for three CEOs in his career: Peter Thiel, Elon Musk, and David Sacks. At PayPal he helped raise over $200M in venture capital and led its IPO and eventual acquisition by eBay for $1.6B. He was Managing Director for Thiel’s hedge fund, Clarium Capital, at its peak worth $8B. And as Executive Vice President at Yammer, he raised $127M and led the deal in which Microsoft acquired the company for $1.2B. People like Mark make everyone up their game, and raise the bar for what success looks like. I’m thrilled to call him a colleague. And by the way, we’re looking for a whole lot more.

Know your strategy and find investors who believe in it

There are many things we call “technology companies” whose unique value is not in fact defined by technology. This does not take away from their achievement or worth to the world, but it can be useful to think about in terms of your own business strategy. Groupon, for example, is a very valuable company, yet its innovation is in its business model. It’s a promotions company with a web site—nobody uses Groupon for its amazing tech. The same can be said for many businesses with incredible sales or marketing.

Intercom does not have an interesting business model. We charge money for people to use our product. And we do not have incredible sales or marketing. We have no salespeople and made our first marketing hire one month ago. What we do have is an innovative product. Something that people can’t get elsewhere which does unique things for them. All of our value is in this technology. And our double-digit monthly growth comes from people liking it and sharing it with their friends. Our company will become more valuable mostly by investing in product. Making it better for those who use it today, and allowing it to be used on more platforms, in more markets.

When you can articulate your general business strategy, it’s far easier to know which investors will be most excited by your deal, and to make your partnership with them a long-term success. Mamoon agreed from the start that the bulk of our capital should be spent on product. On a Sunday in late December, the day before I pitched the Bessemer partnership, I met Ethan for a coffee. I suspected since my first interaction with him that we were on the same page, but I really wanted to hear him say it… I asked: “So if we do this deal, what are the kinds of things you think we should spend the money on?” He said quickly: “Product! I don’t think for the next five years you’d want to stop investing aggressively in product.” The right investor won’t ever want you to be something you’re not, because they’re investing in you for who you are and what you believe in. That’s your strategy. It defines the opportunity and makes you far easier to bet on. I feel so fortunate to have investors who support our passion for simply making an amazing product.

Thanks

It’s fair to say that fundraising is a pseudo-achievement. It doesn’t in itself create value. That comes from what you spend the money on. Yet it’s worth celebrating because of what you really have achieved to be able to convince smart people to back you. Today I’m celebrating the hard work of my incredible co-founders and teammates, and the courage of our early seed and Series A investors. But most importantly I’m celebrating our customers who’ve supported us and paid for our product in its infancy. Thank you, thank you, thank you. We do this for you.


Maybe the Most Orwellian Text Message a Government's Ever Sent | Motherboard

Comments:"Maybe the Most Orwellian Text Message a Government's Ever Sent | Motherboard"

URL:http://motherboard.vice.com/en_ca/blog/maybe-the-most-orwellian-text-message-ever-sent


Ukraine's protests, now under cellphone surveillance. Image: Wikimedia

“Dear subscriber, you are registered as a participant in a mass disturbance.”

That's a text message that thousands of Ukrainian protesters spontaneously received on their cell phones today, as a new law prohibiting public demonstrations went into effect. It was the regime's police force, sending protesters the perfectly dystopian text message to accompany the newly minted, perfectly dystopian legislation. In fact, it's downright Orwellian (and I hate that adjective, and only use it when absolutely necessary, I swear).

But that's what this is: it's technology employed to detect noncompliance, to home in on dissent. The NY Times reports that the "Ukrainian government used telephone technology to pinpoint the locations of cell phones in use near clashes between riot police officers and protesters early on Tuesday." Near. Using a cell phone near a clash lands you on the regime's hit list.

See, Kiev is tearing itself to shreds right now, but since we're kind of burned out on protests, riots, and revolutions at the moment, it's being treated as below-the-fold news. Somehow, the fact that over a million people are marching, camping out, and battling with Ukraine's increasingly authoritarian government is barely making a ripple behind such blockbuster news bits as bridge closures and polar vortexes. Yes, even though protesters are literally building catapults and wearing medieval armor and manning flaming dump trucks.

Hopefully news of the nascent techno-security state will turn some heads—it's right out of 1984, or, more recently, Elysium: technology deployed to "detect" dissent. Again, this tech appears to be highly arbitrary; anyone near the protest is liable to be labeled a "participant," as if targeting protesters directly and so broadly wasn't bad enough in the first place.

It's a further reminder that authoritarian regimes are exploiting the very technology once celebrated as a vehicle for liberation; last year, in Turkey, you'll recall, the state rounded up dissident Twitter users. Now, Ukraine is tracing the phone signal directly. Dictators have already proved plenty adept at pulling the plug on the internet altogether.

All of this puts the lie to the lately popular mythology that technology is inherently a liberating force—with the right hack, it can oppress just as easily.

 

Reach this writer at brian.merchant(at)vice.com and on Twitter, at @bcmerchant

 

More Dystopian Drift:

With Unprecedented Inequality, the US Looks More Like a Dystopia Than Ever

Free the Network

Things Are Getting Orwellian at Exxon's Arkansas Oil Spill

Edward Snowden Says Our World Is Worse than '1984'

Docker Raises $15M For Its Open-Source Platform That Helps Developers Build Apps In The Cloud | TechCrunch

Comments:"Docker Raises $15M For Its Open-Source Platform That Helps Developers Build Apps In The Cloud | TechCrunch"

URL:http://techcrunch.com/2014/01/21/docker-raises-15m-for-popular-open-source-platform-designed-for-developers-to-build-apps-in-the-cloud/


The shift to scale out architectures and an app-centric culture has turned out well for Docker and its lightweight open-source “container” technology designed for developers to quickly move code to the cloud.

That’s evident in today’s news that the company has raised $15 million in a Series B round led by Greylock Partners, with minority participation from Insight Venture Partners and existing investors Benchmark Capital and Trinity Ventures. Also participating is Yahoo! co-founder Jerry Yang, who invested in previous rounds.

Docker will use the funding to push toward the general availability of the Docker environment, develop commercial services that pair with the open-source technology and build a team to support the growing community.

The technology path is similar to the one VMware followed in its early days when IT managed their corporate-owned infrastructure. These were state-of-the-art data centers that had to be optimized to run enterprise software. For these IT managers, VMware became a critical part of the equation so multiple virtual machines could run on its hypervisor and server environment. VMware is lauded for the excellent job it did in managing its technology so the end-user was not impacted and the IT manager could manage the infrastructure effectively.

The similarity to VMware in its early days and the excitement that Docker has generated made it an attractive investment, said Jerry Chen, a general partner at Greylock who joined the venture capital firm in August. It is Chen’s first investment since joining Greylock.

“One of the things we learned at VMware is be as frictionless as possible,” Chen said in a phone interview today. “Docker has that ability as well.”

Docker also can be scaled from scratch. It can grow to multiple apps or be used on public or private servers, Chen said. And it can be scaled out in seconds, moved anywhere and all done without having to re-configure all over again.

“Docker is the right tech to fit the rapid updates,” Chen said.

Docker faces the challenge of making its technology easy-to-use with features that make it effective for a developer or a DevOps professional. For this new DevOps pro, Docker has to consider the management and orchestration of apps that are continuously updated using the Docker environment. For example, Docker will develop both public and private registries for developers to store their containers. It also plans to build management and orchestration tools that are needed as people and their organizations manage more and more Docker containers.

And then there is the community, which continues to grow at scale. Docker is now one of the world’s fastest-growing open-source efforts. There have been more than 9,000 stars given to Docker on GitHub as well as more than 1,320 forks. Managing that growing community will take investment that the company will need to balance with product development.

It’s that community that helped Docker gain acceptance with Red Hat, which is integrating it into OpenShift, its PaaS environment. It has also been adopted by Google Compute Engine. eBay, Yandex and a host of other companies are using Docker in production environments.

Docker’s Background

Docker is the result of a pivot led by Solomon Hykes, who originally launched the company as DotCloud in 2009.

Originally designed as a platform as a service (PaaS), Docker showed promise for its flexible capabilities in providing developers with a service that supported multiple programming languages. But the competition from companies like Heroku and VMware’s Cloud Foundry made for a challenging market, further exacerbated by the lack of a widespread market acceptance for the benefits that PaaS providers offered.

But developers did need a way to move their code to cloud services in a lightweight way without the tax of heavy virtual machines that were difficult to move and required a degree of manual integration. The problem stemmed from the virtualization technology itself, which sits below the operating system. It virtualizes the server, not the app. And because of that, the operating system has to move in order to run the app wherever it might be transported. Once delivered, it has to be booted up and configured to run with the database and the rest of the stack that it depends on.

With Docker, the container sits on top of the operating system. The only thing that moves is the code. The developer does not have to boot and config. Instead, the container syncs with the cloud service.
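The difference described above can be put in a toy model (purely illustrative, not how Docker is actually implemented): a VM image must carry a full guest operating system along with the app, so both move together, while a container image shares the host kernel and ships only the app layers:

```python
from dataclasses import dataclass


@dataclass
class VMImage:
    """A VM image ships a full guest OS plus the app; all of it moves."""
    guest_os_mb: int
    app_mb: int

    def payload_mb(self) -> int:
        return self.guest_os_mb + self.app_mb


@dataclass
class ContainerImage:
    """A container shares the host kernel; only the app layers move."""
    app_mb: int

    def payload_mb(self) -> int:
        return self.app_mb


# Hypothetical sizes to illustrate the gap in what must be transported.
vm = VMImage(guest_os_mb=2_000, app_mb=50)
ctr = ContainerImage(app_mb=50)
print(vm.payload_mb(), ctr.payload_mb())  # 2050 50
```

Under this (simplified) model, the "lightweight" claim is just the missing guest OS: the artifact a developer moves to the cloud shrinks to roughly the size of the code itself.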

Hykes launched the open-source effort last spring and the acceptance has been almost unprecedented.

“I have never seen a technology take off as quickly as Docker and get the type of broad-based adoption that it is getting,” said Dan Scholnick of Trinity Ventures in a phone interview last week. “If you look at the absolute numbers — the number of Docker containers downloaded, the number of docker containers created — they are off the charts. What is more interesting, the adoption is not just coming from startup or certain types of companies. The adoption is across companies of all sizes and industry verticals. It is a combination of high-growth and broad-based adoption that is really amazing.”

There is no real equivalent to Docker. There are alternatives to it, but as a Linux container technology it is the most widely used in the market. Its deepest competition will stem from VMware and virtualization providers that market to developers. And that’s not all. Cloud Foundry has its own form of Linux container, which raises a question about how Docker fulfills its promise as a technology platform. The container is one part of the puzzle. It’s the foundation, but there are tool developers who can seize the opportunity to develop technologies that compete with Docker while also participating in its ecosystem.

Utah is Ending Homelessness by Giving People Homes | NationofChange

Comments:"Utah is Ending Homelessness by Giving People Homes | NationofChange"

URL:http://www.nationofchange.org/utah-ending-homelessness-giving-people-homes-1390056183


Earlier this month, Hawaii State representative Tom Bower (D) began walking the streets of his Waikiki district with a sledgehammer, and smashing shopping carts used by homeless people. “Disgusted” by the city’s chronic homelessness problem, Bower decided to take matters into his own hands — literally. He also took to rousing homeless people if he saw them sleeping at bus stops during the day.

Bower’s tactics were over the top, and so unpopular that he quickly declared “Mission accomplished,” and retired his sledgehammer. But Bower’s frustration with his city’s homelessness problem is just an extreme example of the frustration that has led cities to pass measures that effectively deal with the homeless by criminalizing homelessness.

  • City council members in Columbia, South Carolina, concerned that the city was becoming a “magnet for homeless people,” passed an ordinance giving the homeless the option to either relocate or get arrested. The council later rescinded the ordinance, after backlash from police officers, city workers, and advocates.
  • Last year, Tampa, Florida — which had the most homeless people for a mid-sized city — passed an ordinance allowing police officers to arrest anyone they saw sleeping in public, or “storing personal property in public.” The city followed up with a ban on panhandling downtown, and other locations around the city.
  • Philadelphia took a somewhat different approach, with a law banning the feeding of homeless people on city parkland. Religious groups objected to the ban, and announced that they would not obey it.
  • Raleigh, North Carolina took the step of asking religious groups to stop their longstanding practice of feeding the homeless in a downtown park on weekends. Religious leaders announced that they would risk arrest rather than stop.

This trend makes Utah’s accomplishment even more noteworthy. In eight years, Utah has quietly reduced homelessness by 78 percent, and is on track to end homelessness by 2015.

How did Utah accomplish this? Simple. Utah solved homelessness by giving people homes. In 2005, Utah figured out that the annual cost of E.R. visits and jail stays for homeless people was about $16,670 per person, compared to $11,000 to provide each homeless person with an apartment and a social worker. So, the state began giving away apartments, with no strings attached. Each participant in Utah’s Housing First program also gets a caseworker to help them become self-sufficient, but they keep the apartment even if they fail. The program has been so successful that other states are hoping to achieve similar results with programs modeled on Utah’s.
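The arithmetic behind the decision, using the per-person figures cited above, works out like this:

```python
# Annual per-person costs cited in the article.
er_and_jail_cost = 16_670   # E.R. visits and jail stays
housing_cost = 11_000       # apartment plus a social worker

savings_per_person = er_and_jail_cost - housing_cost
print(savings_per_person)   # 5670
```

In other words, housing a homeless person saves the state roughly $5,670 per person per year relative to doing nothing, before counting any benefit from participants becoming self-sufficient.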

It sounds like Utah borrowed a page from Homes Not Handcuffs, the 2009 report by The National Law Center on Homelessness & Poverty and The National Coalition for the Homeless. Using a 2004 survey and anecdotal evidence from activists, the report concluded that permanent housing for the homeless is cheaper than criminalization. Housing is not only more humane, it’s economical.

This happened in a Republican state! Republicans in Congress would probably have required the homeless to take a drug test before getting an apartment, denied apartments to homeless people with criminal records, and evicted those who failed to become self-sufficient after five years or so. But Utah’s results show that even conservative states can solve problems like homelessness with decidedly progressive solutions.

With Traction But Out Of Cash, 4chan Founder Kills Off Canvas/DrawQuest | TechCrunch

Comments:"With Traction But Out Of Cash, 4chan Founder Kills Off Canvas/DrawQuest | TechCrunch"

URL:http://techcrunch.com/2014/01/21/when-goods-not-good-enough/


“There’s a lot of glorification of startups and being a founder. People brush the failures under the rug, but that’s the worst thing you can do. You kind of have to face it head on,” says moot aka Christopher Poole. So rather than raise more money for his remix artist community Canvas and game DrawQuest, later today he’ll announce they’re closing. “No soft-landing, no aqui-hire, just ‘shutting down’ shutting down.”

[Update: DrawQuest and Canvas have now published blog posts confirming this article and telling their users what's going on. Moot has also penned his own eulogy for his startup, and will be writing more in the future in hopes of educating other entrepreneurs.

In a touching part of his post-mortem, moot opens up saying "Few in business will know the pain of what it means to fail as a venture-backed CEO. Not only do you fail your employees, your customers, and yourself, but you also fail your investors—partners who helped you bring your idea to life."]

What’s different about this trip to the deadpool is that DrawQuest was actually doing relatively well. Launched a year ago to inspire people to take on daily bouts of creativity through drawing challenges, it reached 1.4 million downloads, 550,000 registered users, 400,000 monthly users, 25,000 daily users, and 8 million drawings.

“We’re doing better than 98% of products out there, especially in the mobile space,” says moot, but he admits that traction is “shy of that all-important million (monthly users). Where we failed basically was one: to crack our growth engine. But importantly, we were never able to crack the business side of things in time.”

Perhaps if DrawQuest was the plan all along, it could have survived long enough to grow and monetize, but it was on a short fuse. Moot originally raised a $625,000 seed round led by Lerer Ventures in May 2010 to start DrawQuest’s predecessor Canvas, a media-centric forum where people could post, remix, and discuss visual Internet art. Then he raised $3 million more in June 2011 in a Series A led by Union Square Ventures’ Fred Wilson and joined by SV Angel, Lerer Ventures, Andreessen Horowitz, Founder Collective, and Joshua Schachter.

It wasn’t until February 2013 that DrawQuest launched, and that tardy pivot left moot lagging far behind where he needed to be. “We built this app with less than half of our runway remaining. You have to do twice as much with half as much time. It’s really freaking hard.” For seed stage companies it might be easier, but proving you’re worth the valuation of a Series B upround requires incredible metrics that are tough to reach if you have to audible late in the game. “People trivialize pivoting but it’s truly a hail mary, and it’s rare that people can pull this off.”

DrawQuest got some traction, but found that selling paint brushes in a drawing app is a lot harder than selling extra lives in Candy Crush. There’s just not the same emotional ‘I can’t play if I don’t pay’ urgency. “I definitely have a new appreciation for game designers,” moot tells me.

With Canvas/DrawQuest’s headcount incurring serious costs, moot searched for someone to acquire his startup. “We approached a few companies and no one was buying what we were selling. [We were] never trying to win any awards with our brushstroke algorithms, so from an IP standpoint [there wasn't much to buy]. The nut that was interesting was the community, but it wasn’t really clear what exactly this community would do for their business.”

After running “Wild West of the Internet” image-sharing site 4chan since 2003, moot was actually looking forward to not being the head honcho for once. “I thought we were doing great work and we could continue to do great work as part of a bigger organization. I had kind of psyched myself up for that, but then…” no deal materialized.

“Ultimately we decided we wouldn’t go try to raise more money – it wasn’t really on the table because we just hadn’t created enough value,” says moot. That’s a rare admission of failure in the success theater of startups. Most founders trumpet their funding rounds and growth milestones but slink away when things go pear-shaped. Poole’s willingness to be humble and transparent is admirable, and could increase investors’ willingness to back his future projects.

So today he’ll announce that Canvas is shutting down in the next few days, and users will get an email with a link to download all their content.

As for DrawQuest, moot says “I’m going to try to keep the servers up as long as I can. As of today all of the company’s employees are going their separate ways…but I’m hoping that between in-app purchases and whatever money is in the bank we’d be able to keep the service alive for a bit longer. We think it makes sense to pay our AWS bill until we’re completely out of money which will hopefully be a few months.” Perhaps even longer as moot dreams that maybe “some white knight comes in and says ‘I want to chip in for the server costs’.”

In DrawQuest’s goodbye post, moot writes “We hope you’ll all continue to spread the importance of daily creativity, and inspire those around you to draw more often. While DrawQuest may not be around next year, you all will be, and we hope you’ll leave the world a better, more creative place.”

And as for moot himself?:

“I’m a free agent for the first time in over 4 years because I was in college when I dropped out to start this company. I’m definitely not trying to start another company anytime soon. I need to decompress and reflect on what I’ve learned and take some time to myself because it’s been a bit of an emotional rollercoaster. You start to appreciate why the best investors are the best investors. In our final hour everyone was so supportive. It’s made the difference between me being an emotional wreck and me being in as good of a place emotionally as you can be when you fail. Most companies fail, and unfortunately we are one of those companies. Those are the odds.”

Debunking Princeton

Comments:"Debunking Princeton"

URL:https://www.facebook.com/notes/mike-develin/debunking-princeton/10151947421191849


Like many of you, we were intrigued by a recent article by Princeton researchers predicting the imminent demise of Facebook. Of particular interest was the innovative use of Google search data to predict engagement trends, instead of studying the actual engagement trends. Using the same robust methodology featured in the paper, we attempted to find out more about this "Princeton University" - and you won't believe what we found!

In keeping with the scientific principle "correlation equals causation," our research unequivocally demonstrated that Princeton may be in danger of disappearing entirely. Looking at page likes on Facebook, we find the following alarming trend:

Now, Facebook isn't the only repository of human knowledge out there. A search of Google Scholar revealing a plethora of scholarly articles of great scholarliness turned up the following results, showing the percentage of articles matching the query "Princeton" by year:

The trend is similarly alarming: since 2009, the percentage of "Princeton" papers in journals has dropped dramatically.

Of course, Princeton University is primarily an institution of higher learning - so as long as it has students, it'll be fine. Unfortunately, in investigating this, we found a strong correlation between the undergraduate enrollment of an institution and its Google Trends index:

Sadly, this spells bad news for this Princeton entity, whose Google Trends search scores have been declining for the last several years:

This trend suggests that Princeton will have only half its current enrollment by 2018, and by 2021 it will have no students at all, agreeing with the previous graph of scholarly scholarliness. Based on our robust scientific analysis, future generations will only be able to imagine this now-rubble institution that once walked this earth.
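The method being parodied above is a straight-line extrapolation to zero. A minimal sketch of that fallacy, with made-up trend scores (the post does not publish its underlying numbers):

```python
def linear_zero_year(year_a: int, score_a: float,
                     year_b: int, score_b: float) -> float:
    """Fit a line through two (year, score) points; return the year it hits zero."""
    slope = (score_b - score_a) / (year_b - year_a)
    if slope >= 0:
        raise ValueError("no downward trend to extrapolate")
    # Solve score_b + slope * (year - year_b) = 0 for year.
    return year_b - score_b / slope

# Made-up Google Trends scores for "Princeton": 100 in 2009, 60 in 2014.
print(linear_zero_year(2009, 100, 2014, 60))  # 2021.5
```

A line through any two points of a noisy, bounded series will eventually cross zero, which is precisely why the "no students by 2021" conclusion is a joke rather than a forecast.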

While we are concerned for Princeton University, we are even more concerned about the fate of the planet — Google Trends for "air" have also been declining steadily, and our projections show that by the year 2060 there will be no air left:

As previous researchers [J. Sparks, 2008] have expressed in the past, this will have grievous consequences for the fate of all humanity, not just our academic colleagues in New Jersey.

Although this research has not yet been peer-reviewed, every Like for this post counts as a peer review. Start reviewing!

P.S. We don’t really think Princeton or the world’s air supply is going anywhere soon. We love Princeton (and air). As data scientists, we wanted to give a fun reminder that not all research is created equal – and some methods of analysis lead to pretty crazy conclusions.

Research by Mike Develin, Lada Adamic, and Sean Taylor.

The Techtopus: How Silicon Valley’s most celebrated CEOs conspired to drive down 100,000 tech engineers’ wages | PandoDaily

Comments:"The Techtopus: How Silicon Valley’s most celebrated CEOs conspired to drive down 100,000 tech engineers’ wages | PandoDaily"

URL:http://pando.com/2014/01/23/the-techtopus-how-silicon-valleys-most-celebrated-ceos-conspired-to-drive-down-100000-tech-engineers-wages/


By Mark Ames
On January 23, 2014

In early 2005, as demand for Silicon Valley engineers began booming, Apple’s Steve Jobs sealed a secret and illegal pact with Google’s Eric Schmidt to artificially push their workers’ wages lower by agreeing not to recruit each other’s employees, sharing wage scale information, and punishing violators. On February 27, 2005, Bill Campbell, a member of Apple’s board of directors and senior advisor to Google, emailed Jobs to confirm that Eric Schmidt “got directly involved and firmly stopped all efforts to recruit anyone from Apple.”

Later that year, Schmidt instructed his Sr. VP for Business Operations Shona Brown to keep the pact a secret and only share information “verbally, since I don’t want to create a paper trail over which we can be sued later?”

These secret conversations and agreements between some of the biggest names in Silicon Valley were first exposed in a Department of Justice antitrust investigation launched by the Obama Administration in 2010. That DOJ suit became the basis of a class action lawsuit filed on behalf of over 100,000 tech employees whose wages were artificially lowered — an estimated $9 billion effectively stolen by the high-flying companies from their workers to pad company earnings — in the second half of the 2000s. Last week, the 9th Circuit Court of Appeals denied attempts by Apple, Google, Intel, and Adobe to have the lawsuit tossed, and gave final approval for the class action suit to go forward. A jury trial date has been set for May 27 in San Jose, before US District Court judge Lucy Koh, who presided over the Samsung-Apple patent suit.

In a related but separate investigation and ongoing suit, eBay and its former CEO Meg Whitman, now CEO of HP, are being sued by both the federal government and the state of California for arranging a similar, secret wage-theft agreement with Intuit (and possibly Google as well) during the same period.

The secret wage-theft agreements between Apple, Google, Intel, Adobe, Intuit, and Pixar (now owned by Disney) are described in court papers obtained by PandoDaily as “an overarching conspiracy” in violation of the Sherman Antitrust Act and the Clayton Antitrust Act, and at times it reads like something lifted straight out of the robber baron era that produced those laws. Today’s inequality crisis is America’s worst on record since statistics were first recorded a hundred years ago — the only comparison would be to the era of the railroad tycoons in the late 19th century.

Shortly after sealing the pact with Google, Jobs strong-armed Adobe into joining after he complained to CEO Bruce Chizen that Adobe was recruiting Apple’s employees. Chizen sheepishly responded that he thought only a small class of employees were off-limits:

I thought we agreed not to recruit any senior level employees…. I would propose we keep it that way. Open to discuss. It would be good to agree.

Jobs responded by threatening war:

OK, I’ll tell our recruiters they are free to approach any Adobe employee who is not a Sr. Director or VP. Am I understanding your position correctly?

Adobe’s Chizen immediately backed down:

I’d rather agree NOT to actively solicit any employee from either company…..If you are in agreement, I will let my folks know.

The next day, Chizen let his folks — Adobe’s VP of Human Resources — know that “we are not to solicit ANY Apple employees, and visa versa.” Chizen was worried that if he didn’t agree, Jobs would make Adobe pay:

if I tell Steve [Jobs] it’s open season (other than senior managers), he will deliberately poach Adobe just to prove a point. Knowing Steve, he will go after some of our top Mac talent…and he will do it in a way in which they will be enticed to come (extraordinary packages and Steve wooing).

Indeed, Jobs even threatened war against Google in early 2005, before their “gentlemen’s agreement,” telling Sergey Brin to back off recruiting Apple’s Safari team:

if you [Brin] hire a single one of these people that means war.

Brin immediately advised Google’s Executive Management Team to halt all recruiting of Apple employees until an agreement was discussed.

In the geopolitics of Silicon Valley tech power, Adobe was no match for a corporate superpower like Apple. Inequality of the sort we’re experiencing today affects everyone in ways we haven’t even thought of — whether it’s Jobs bullying slightly lesser executives into joining an illegal wage-theft pact, or the tens of thousands of workers whose wages were artificially lowered, transferred into higher corporate earnings, and higher compensation for those already richest and most powerful to begin with.

Over the next two years, as the tech industry entered another frothing bubble, the secret wage-theft pact which began with Apple, Google and Pixar expanded to include Intuit and Intel. The secret agreements were based on relationships, and those relationships were forged in Silicon Valley’s incestuous boards of directors, which in the past has been recognized mostly as a problem for shareholders and corporate governance advocates, rather than for the tens of thousands of employees whose wages and lives are viscerally affected by their clubby backroom deals. Intel CEO Paul Otellini joined Google’s board of directors in 2004, a part-time gig that netted Otellini $23 million in 2007, with tens of millions more in Google stock options still in his name — which worked out to $464,000 per Google board event if you only counted the stock options Otellini cashed out — dwarfing what Otellini made off his Intel stock options, despite spending most of his career with the company.

Meanwhile, Eric Schmidt served on Apple’s board of directors until 2009, when a DoJ antitrust investigation pushed him to resign. Intuit’s chairman at the time, Bill Campbell, also served on Apple’s board of directors, and as official advisor — “consigliere” — to Google chief Eric Schmidt, until he resigned from Google in 2010. Campbell, a celebrated figure (“a quasi-religious force for good in Silicon Valley”) played a key behind-the-scenes role connecting the various CEOs into the wage-theft pact. Steve Jobs, who took regular Sunday walks with Campbell near their Palo Alto homes, valued Campbell for his ability “to get A and B work out of people,” gushing that the conduit at the center of the $9 billion wage theft suit, “loves people, and he loves growing people.”

Indeed. Eric Schmidt has been, if anything, even more profuse in his praise of Campbell. Schmidt credits Campbell for structuring Google when Schmidt was brought on board in 2001:

His contribution to Google — it is literally not possible to overstate. He essentially architected the organizational structure.

Court documents show it was Campbell who first brought together Jobs and Schmidt to form the core of the Silicon Valley wage-theft pact. And Campbell’s name appears as the early conduit bringing Intel into the pact with Google:

Bill Campbell (Chairman of Intuit Board of Directors, Co-Lead Director of Apple, and advisor to Google) was also involved in the Google-Intel agreement, as reflected in an email exchange from 2006 in which Bill Campbell agreed with Jonathan Rosenberg (Google Advisor to the Office of CEO and former Senior Vice President of Product Management) that Google should call [Intel CEO] Paul Otellini before making an offer to an Intel employee, regardless of whether the Intel employee first approached Google.

Getting Google on board with the wage-theft pact was the key for Apple from the start — articles in the tech press in 2005 pointed to Google’s recruitment drive and incentives as the key reason why tech wages soared that year, at the highest rate in well over a decade.

Campbell helped bring in Google, Intel, and, in 2006, Campbell saw to it that Intuit — the company he chaired — also joined the pact.

From the peaks of Silicon Valley, Campbell’s interpersonal skills were magical and awe-inspiring, a crucial factor in creating so much unimaginable wealth for their companies and themselves. Jobs said of Campbell:

There is something deeply human about him.

And Schmidt swooned:

He is my closest confidant…because he is the definition of trust.

Things — and people — look very different when you’re down in the Valley. In the nearly 100-page court opinion issued last October by Judge Koh granting class status to the lawsuit, Campbell comes off as anything but mystical and “deeply human.” He comes off as a scheming consigliere carrying out some of the drearier tasks that the oligarchs he served were constitutionally not so capable of arranging without him.

But the realities of inequality and capitalism invariably lead to mysticism of this sort, a natural human response to the dreary realities of concentrating so much wealth and power in the hands of a dozen interlocking board members at the expense of 100,000 employees, and so many other negative knock-off effects on the politics and culture of the world they dominate.

One of the more telling elements to this lawsuit is the role played by “Star Wars” creator George Lucas, who emerges as the Obi-Wan Kenobi of the wage-theft scheme. It’s almost too perfectly symbolic that Lucas — the symbiosis of Baby Boomer New Age mysticism, Left Coast power, political infantilism, and dreary 19th century labor exploitation — should be responsible for dreaming up the wage theft scheme back in the mid-1980s, when Lucas sold the computer animation division of Lucasfilm, Pixar, to Steve Jobs.

As Pixar went independent in 1986, Lucas explained his philosophy about how competition for computer engineers violated his sense of normalcy — and profit margins. According to court documents:

George Lucas believed that companies should not compete against each other for employees, because ‘[i]t’s not normal industrial competitive situation.’ As George Lucas explained, ‘I always — the rule we had, or the rule that I put down for everybody,’ was that ‘we cannot get into a bidding war with other companies because we don’t have the margins for that sort of thing.’

Translated, Lucas’ wage-reduction agreement meant that Lucasfilm and Pixar agreed to a) never cold call each other’s employees; b) notify each other if making an offer to an employee of the other company, even if that employee applied for the job on his or her own without being recruited; c) any offer made would be “final” so as to avoid a costly bidding war that would drive up not just the employee’s salary, but also drive up the pay scale of every other employee in the firm.

Jobs held to this agreement, and used it as the basis two decades later to suppress employee costs just as fierce competition was driving up tech engineers’ wages.

The companies argued that the non-recruitment agreements had nothing to do with driving down wages. But the court ruled that there was “extensive documentary evidence” that the pacts were designed specifically to push down wages, and that they succeeded in doing so. The evidence includes software tools used by the companies to keep tabs on pay scales to ensure that within job “families” or titles, pay remained equitable within a margin of variation, and that as competition and recruitment boiled over in 2005, emails between executives and human resources departments complained about the pressure on wages caused by recruiters cold calling their employees, and bidding wars for key engineers.

Google, like the others, used a “salary algorithm” to ensure salaries remained within a tight band across like jobs. Although tech companies like to claim that talent and hard work are rewarded, in private, Google’s “People Ops” department kept overall compensation essentially equitable by making sure that lower-paid employees who performed well got higher salary increases than higher-paid employees who also performed well.

As Intel’s director of Compensation and Benefits bluntly summed up the Silicon Valley culture’s official cant versus its actual practices,

While we pay lip service to meritocracy, we really believe more in treating everyone the same within broad bands.

The companies in the pact shared their salary data with each other in order to coordinate and keep down wages — something unimaginable had the firms not agreed to not compete for each other’s employees. And they fired their own recruiters on just a phone call from a pact member CEO.

In 2007, when Jobs learned that Google tried recruiting one of Apple’s employees, he forwarded the message to Eric Schmidt with a personal comment attached: “I would be very pleased if your recruiting department would stop doing this.”

Within an hour, Google made a “public example” by “terminating” the recruiter in such a manner as to “(hopefully) prevent future occurrences.”

Likewise, when Intel CEO Paul Otellini heard that Google was recruiting their tech staff, he sent a message to Eric Schmidt: “Eric, can you pls help here???”

The next day, Schmidt wrote back to Otellini: “If we find that a recruiter called into Intel, we will terminate the recruiter.”

One of the reasons why non-recruitment works so well in artificially lowering workers’ wages is that it deprives employees of information about the job market, particularly one as competitive and overheating as Silicon Valley’s in the mid-2000s. As the companies’ own internal documents and statements showed, they generally considered cold-calling recruitment of “passive” talent — workers not necessarily looking for a job until enticed by a recruiter — to be the most important means of hiring the best employees.

Just before joining the wage-theft pact with Apple, Google’s human resources executives are quoted sounding the alarm that they needed to “dramatically increase the engineering hiring rate” and that would require “drain[ing] competitors to accomplish this rate of hiring.” One CEO who noticed Google’s hiring spree was eBay CEO Meg Whitman, who in early 2005 called Eric Schmidt to complain, “Google is the talk of the Valley because [you] are driving up salaries across the board.” Around this time, eBay entered an illegal wage-theft non-solicitation scheme of its own with Bill Campbell’s Intuit, which is still being tried in ongoing federal and California state suits.

Google placed the highest premium on “passive” talent that they cold-called because “passively sourced candidates offer[ed] the highest yield,” according to court documents. The reason is like the old Groucho Marx joke about not wanting to belong to a club that would let you join it — workers actively seeking a new employer were assumed to have something wrong with them; workers who weren’t looking were assumed to be the kind of good happy talented workers that company poachers would want on their team.

For all of the high-minded talk of post-industrial technotopia and Silicon Valley as worker’s paradise, what we see here in stark ugly detail is how the same old world scams and rules are still operative.

Court documents below…

October 24, 2013 Class Cert Order


[Illustration by Brad Jonas for Pando]

Microsoft Investor Relations - Press Releases


Comments:"Microsoft Investor Relations - Press Releases"

URL:http://www.microsoft.com/investor/EarningsAndFinancials/Earnings/PressReleaseAndWebcast/FY14/Q2/default.aspx


Earnings Release FY14 Q2

Related Information

FY14 Earnings Release Schedule

Q3-Thursday, April 24

IMPORTANT NOTICE TO USERS (summary only, click here for full text of notice); All information is unaudited unless otherwise noted or accompanied by an audit opinion and is subject to the more comprehensive information contained in our SEC reports and filings. We do not endorse third-party information. All information speaks as of the last fiscal quarter or year for which we have filed a Form 10-K or 10-Q, or for historical information the date or period expressly indicated in or with such information. We undertake no duty to update the information. Forward-looking statements are subject to risks and uncertainties described in our Forms 10-Q and 10-K.

Article 46

Model Your Users: Algorithms Behind the Minuum Keyboard | The Minuum Keyboard Project


Comments:"Model Your Users: Algorithms Behind the Minuum Keyboard | The Minuum Keyboard Project"

URL:http://minuum.com/model-your-users-algorithms-behind-the-minuum-keyboard/


When you’re creating a new keyboard technology, there’s a ton of work that goes into both the interaction design, and into the algorithms behind the scenes. While the design of our keyboard is best understood simply by using it, the real “magic” that makes our one-dimensional keyboard possible lies in the statistical algorithms that make it tick.

If you haven’t already seen or used the Minuum keyboard, the brief summary is that we let you compress the conventional keyboard down to just one row of keys, opening up the possibility of typing anywhere where you can measure one dimension of input.

 

By shrinking the keyboard in this way we soon had to grapple with a basic fact: human input is imprecise, and the faster you type the more imprecise it gets. Rather than trying to improve user precision, we instead embrace sloppy typing.

This only works because we use disambiguation in addition to auto-correction. While “auto-correction” implies that you made a mistake that needed correcting, “disambiguation” accepts the fundamental ambiguity of human interaction, and uses an understanding of language to narrow things down. Think of it like speech recognition: in a noisy bar, the problem isn’t that your friends are speaking incorrectly; human speech is ambiguous, and the noisiness of the environment sure doesn’t help. You can only understand them because you have prior knowledge of the sorts of things they are likely to say.

Which leads us into the wonderful world of…

Bayesian statistics!

Minuum combines two factors to evaluate a word: a spatial model, which understands how precise you are when you tap on the keyboard (we perform user studies to measure this), and a language model, which understands what words you’re likely to use (we build this from huge bodies of real-world text). If you tap on the keyboard five times, and those taps somewhat resemble the word “hello”, we use the following Bayesian equation to test how likely it is that you wanted the word “hello”:

p(word | taps) ∝ p(taps | word) × p(word)

Let’s break that equation down: the probability that you wanted the word “hello” given those taps, is proportional to the product of the spatial and language terms. The spatial term gives the likelihood that wanting to type the word “hello” could have led you to input that sequence of taps; the language term gives the probability that you would ever type the word “hello”.

Minuum’s job is to find the word that maximizes p(word|taps). In the example above, Minuum is generating a score for the word “hello”. To find the best word, Minuum would compare this score to the scores for other words, calculated the same way. The closer your taps are to the correct locations for a given word, the greater the spatial term for that word; the more common a word in English (or French, German, Italian or Spanish if you have one of those languages enabled) the greater the language term.
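The scoring described above can be sketched in a few lines. This is a minimal illustration, not Minuum's actual implementation: the one-dimensional letter positions and the sigma value are invented for the example, and the frequency counts come from the article's "if"/"IV" table.

```python
import math

# Illustrative positions of letters on a one-dimensional keyboard, scaled 0..1.
LETTER_POS = {c: i / 25.0 for i, c in enumerate("qwertyuiopasdfghjklzxcvbnm")}

def spatial_log_likelihood(taps, word, sigma=0.05):
    """log p(taps | word): independent Gaussians centered on each target letter."""
    if len(taps) != len(word):
        return float("-inf")
    return sum(-((tap - LETTER_POS[ch]) ** 2) / (2 * sigma ** 2)
               for tap, ch in zip(taps, word))

def score(taps, word, counts):
    """log p(word | taps), up to a constant: spatial term plus language term."""
    total = sum(counts.values())
    return spatial_log_likelihood(taps, word) + math.log(counts[word] / total)

# Frequencies from the article's table: "if" vs "IV".
counts = {"if": 1_115_786, "iv": 5_335}
taps = [0.27, 0.53]  # two sloppy taps landing near "i" and "f"
best = max(counts, key=lambda w: score(taps, w, counts))  # "if" wins easily
```

Even though the taps are off-center, "if" dominates: it is both spatially closer and vastly more frequent than "IV".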

A simple spatial model

Minuum uses a fairly complicated spatial model (remember the spatial model represents how people tend to actually type on the keyboard). This model can handle many kinds of imprecision, such as extra and omitted characters. A simple model that works surprisingly well, however, is to treat the probability density of a tap as a Gaussian centered at the target character.

This shows that if you mean to type a “t”, the most likely point you tap on the keyboard is right on the “t”, but there is still a significant probability that you tap on a nearby location closer to the “v” or the “g”.
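As a hedged sketch of that density (the target position and sigma here are made up; in practice they would be fit from the user studies mentioned above):

```python
import math

def tap_density(x, target, sigma=0.05):
    """Gaussian probability density of a tap landing at x when aiming for `target`."""
    return (math.exp(-((x - target) ** 2) / (2 * sigma ** 2))
            / (sigma * math.sqrt(2 * math.pi)))

# Aiming at "t": the density peaks on the target and decays toward neighbors.
t = 0.16  # illustrative position of "t" on the one-dimensional keyboard
peak = tap_density(t, t)
near = tap_density(t + 0.04, t)   # a tap drifting toward a neighboring letter
far = tap_density(t + 0.08, t)    # a tap two letters away
```

`peak > near > far`, matching the intuition that a tap right on "t" is most likely but nearby taps still carry significant probability.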

A simple language model

The simplest language model is just a count of word frequencies. Take a large body of text (a corpus), and count how many times each word shows up.

Word    Frequency
if      1,115,786
IV      5,335

To compare two potential words, say “if” and “IV”, according to the above table “if” is around 200 times more likely to be typed than “IV”.

This simple model, like the simple spatial model, works quite well in practice. Further improvements can come from using context such as the word immediately before the current entry.

Word(s)    Frequency
what if    13,207
what of    1,380

The phrase “what if” is about ten times more common than “what of”, so even though “if” and “of” are both very common words, given the context “what”, we can confidently guess that “if” is the intended word.
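A context-aware language term is just conditional probability estimated from bigram counts. A minimal sketch using the counts from the table above:

```python
# Bigram counts from the article's table; counts are corpus-dependent.
bigrams = {("what", "if"): 13_207, ("what", "of"): 1_380}

def context_prob(prev, word):
    """p(word | prev), estimated from counts of words following `prev`."""
    following = {w: c for (p, w), c in bigrams.items() if p == prev}
    return following.get(word, 0) / sum(following.values())

# Given the context "what", "if" is roughly ten times likelier than "of".
ratio = context_prob("what", "if") / context_prob("what", "of")
```

A real model would back off to unigram frequencies when a bigram is unseen; this sketch only shows the conditioning step.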

Words are high-dimensional points

I understand problems best when I can picture them geometrically. My intuitive understanding of the disambiguation problem finally clicked when we had an insight: words are points in high-dimensional space, and typing is a search for those words! Skeptical? Let me explain.
Minuum is a single line, so tapping your finger on Minuum can be represented by one number. In the figure below, for instance, a tap on “q” could clock in between 0 and 0.04, and a tap on “p” between 0.98 and 1.

A continuum of letters from 0.0 to 1.0

A two-letter word consists of two taps, and so can be represented as a pair of numbers. The word “an”, typed perfectly, is represented as {0.06, 0.67}, and the word “if” as {0.83, 0.40}. The figure below shows the positions of some common 2-letter words in this “word space”.

The exact same logic applies to longer words: “and” is {0.06, 0.67, 0.29}, “minuum” is {0.79, 0.83, 0.67, 0.71, 0.71, 0.79}. Above three dimensions, unfortunately, it’s much harder to visualize.

A user’s sequence of taps is also a point in this word space, which we can call the input point. The “closer” a word’s point is to the input point, the higher that word will score in the spatial term of the Bayesian equation above. Odds are, whatever you meant to type is “nearby” to what you actually typed in this space.
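The geometric view can be sketched as a nearest-point search, using the letter coordinates from the article's own examples ("an" = {0.06, 0.67}, "if" = {0.83, 0.40}). A minimal illustration; Minuum's real search also weighs the language model:

```python
import math

# Letter positions recovered from the article's worked examples.
POS = {"a": 0.06, "n": 0.67, "d": 0.29, "i": 0.83, "f": 0.40}

def word_point(word):
    """A word typed perfectly is a point with one coordinate per letter."""
    return [POS[c] for c in word]

def distance(taps, word):
    """Euclidean distance between an input point and a word's point."""
    pt = word_point(word)
    if len(taps) != len(pt):
        return float("inf")
    return math.sqrt(sum((t - w) ** 2 for t, w in zip(taps, pt)))

# Sloppy taps aimed at "an" land much nearer "an" than "if" in word space.
taps = [0.10, 0.62]
nearest = min(["an", "if"], key=lambda w: distance(taps, w))  # "an"
```

The spatial term of the Bayesian score is a monotone function of this distance, so "nearest word" and "highest spatial likelihood" coincide under the simple Gaussian model.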

So let’s visualize some words!

We can generate a full map of the top two-letter words recommended by Minuum, based on any possible pair of input taps; here, more common words tend to end up with larger areas. By hovering over the graph, you can see what other words would be recommended as alternative candidates.

Two-letter predictions with no context

Two-letter word predictions with previous word “what”

Toggle context
Toggle the context button above to see what happens when we use a better language model to account for the user having previously typed the word “what”. Clearly, “if” is more likely and “in” is less likely to be recommended when we account for context, because “what if” is far more common than “what of”, while “what in” is comparatively rare.

Of course, Minuum uses more context than just the previous word, and also learns your personal typing tendencies over time, so this picture is different for each user.

Statistical modelling for better interfaces

All this complexity allows Minuum to shed some constraints of conventional keyboards (working even as a one-row keyboard on a 1” screen!).

What does this show? That interfaces are better when they understand the user! Google Instant is awesome because it knows what you’re looking for after a couple keystrokes. Siri would be impossible without complex language modeling. Minuum can simplify keyboards only by combining strong spatial and language models of real human input. If you’re dealing with a complex interface, consider how you can statistically model user behaviour to simplify the interaction required.

The U.S. Crackdown on Hackers Is Our New War on Drugs | Wired Opinion | Wired.com


Comments:"The U.S. Crackdown on Hackers Is Our New War on Drugs | Wired Opinion | Wired.com"

URL:http://www.wired.com/opinion/2014/01/using-computer-drug-war-decade-dangerous-excessive-punishment-consequences/


Before Edward Snowden showed up, 2013 was shaping up as the year of reckoning for the much criticized federal anti-hacking statute, the Computer Fraud and Abuse Act (“CFAA”). The suicide of Aaron Swartz in January 2013 brought the CFAA into mainstream consciousness, so Congress held hearings about the case, and legislative fixes were introduced to change the law.

Recognizing the powerful capabilities of modern computing and networking has resulted in ‘cyber panic’ in legislatures and prosecutor offices across the country.

Finally, there seemed to be a newfound scrutiny of CFAA prosecutions and punishment for accessing computer data without or in excess of “authorization” — which affected everyone from Chelsea Manning to Jeremy Hammond to Andrew “Weev” Auernheimer (disclosure: I’m one of his lawyers on appeal). Not to mention less illustrious personalities and everyday users, such as people who delete cookies from their browsers.

But unfortunately, not much has changed; if anything, the growing recognition of the powerful capabilities of modern computing and networking has resulted in a “cyber panic” in legislatures and prosecutor offices across the country. Instead of reexamination, we’ve seen aggressive charges and excessive punishment.

This cyber panic isn’t just a CFAA problem. In the zeal to crack down on cyberbullying, legislatures have passed overbroad laws criminalizing speech clearly protected by the First Amendment. This comes after one effort to use the CFAA to criminalize cyberbullying — built on the premise that violating a website’s terms of service was unauthorized access, or the equivalent of hacking – was thrown out as unconstitutionally vague.

The panic has even spread to how crime is investigated. To prevent digital contraband from coming into the United States, border officials can now search electronic devices without any suspicion of wrongdoing. To get to illicit files on a seized computer, the government can force you to decrypt your computer and threaten you with jail for noncompliance. To get information about one customer, the FBI can demand a service provider turn over the key that unlocks communications from all of the service’s customers. And let’s not even get started on what the NSA has been up to.

The Problem of Excessive Punishment

There’s no doubt that there are good intentions here: to catch bad guys, keep people safe, and preserve some order in a chaotic and changing world. But this “cyber panic,” particularly with the excessive and aggressive use of the CFAA, comes with a real consequence: locking up people in prison for years.

Take the case of Matthew Keys, a former social media editor at Reuters, charged with violating the CFAA in federal court in Sacramento. He allegedly turned over the username and password of a server belonging to the Tribune Company to members of Anonymous, who altered the headline of a Los Angeles Times story online. Among other changes, the headline went from “Pressure builds in House to pass tax-cut package” to “Pressure builds in House to elect CHIPPY 1337.” It seems like a clear-cut case of vandalism, a prank that caused some damage but little other harm.

Under California law, physical vandalism – like spray painting graffiti on a building — can be punished as either a misdemeanor or a felony, with probation available for both types of charges. If probation is granted, the longest sentence a defendant can serve as a condition of probation is one year in county jail.

But look at the punishment awaiting Keys. He didn’t get charged with a misdemeanor; he got indicted on three felony charges, for which he faces a harsh prison sentence. No, he won’t get anything close to the 10-year maximum. But a cursory calculation of his potential sentence under the federal sentencing guidelines suggests he’s looking at a sentence between 21 and 27 months — roughly two years of his life — if he decides to go to trial and loses.

Here are more details on how such sentencing works:

…Federal sentencing is based on two things: the seriousness of a crime and the person’s criminal history. The two factors are plotted on a table, with the y-axis a scale of 1 to 43 “levels” that determines the seriousness of a crime, and the x-axis a scale of I to VI that measures criminal history. At sentencing, the judge must determine both scores, plot them on the table, and determine the sentencing range in months, which the court can follow or disregard at its own discretion.

…Someone like Keys, who has no criminal history, is in criminal history category I. The starting point for most CFAA crimes is level 6, which is low on the scale but can quickly increase.

…Assuming the allegations in Keys’ search warrant are correct, the Tribune Company spent $17,650.40 to fix the damage, resulting in an increase of 4 levels for causing more than $10,000 and less than $30,000 in damage. Because Keys is charged with causing damage to a computer, he receives another 4-level increase. And because he likely abused a position of trust, he receives another 2-level increase, for a total offense level of 16 — which has a sentencing range between 21 and 27 months for a person in criminal history category I. (That places Keys in “Zone C” of the Sentencing Table, which means the Guidelines don’t authorize a grant of probation, though the judge could impose probation if she wanted to.)
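The offense-level arithmetic in that breakdown can be sketched directly. This is a simplified illustration of the article's numbers, not the actual U.S. Sentencing Guidelines tables:

```python
# Offense-level tally for the Keys case, per the article's breakdown.
level = 6                      # starting level for most CFAA crimes

damage = 17_650.40             # Tribune's claimed repair cost
if 10_000 < damage <= 30_000:
    level += 4                 # loss between $10,000 and $30,000

level += 4                     # charged with causing damage to a computer
level += 2                     # likely abuse of a position of trust

# level == 16, which at criminal history category I maps to a
# guideline range of 21-27 months, per the article.
```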

As a country and a criminal justice system, we’ve been down this road of excessive punishment before: with drugs. Prosecutors and lawmakers need to take a step back and think long and hard about whether we’re going down the same road with their zeal towards computer crimes.

Hanni Fakhoury is a former federal public defender and a current Staff Attorney at the Electronic Frontier Foundation (EFF) who focuses on criminal law, privacy, and free speech litigation and advocacy. Follow him on Twitter @hannifakhoury.

For many years, there was a radical disparity in how federal law treated crack and powder cocaine. A person who possessed 5 grams of crack cocaine could be charged with a felony. But it took 500 grams of powder cocaine to get the same felony punishment. This 100-to-1 ratio was born in the 1980s, when Congress was concerned that crack — predominantly used in urban areas by people of color — was becoming an epidemic and a violent one at that.

This extreme disparity only ensured that a disproportionate amount of people of color ended up in prison. Receiving little rehabilitation while incarcerated and struggling to find work or otherwise reintegrate into society once released, convicts would return to crime, get caught, and be sentenced as a recidivist. That meant a longer jail sentence and the continuation of a destructive cycle.

But over the last few years, there has been significant progress towards narrowing this gap. In 2010, Congress passed — and President Obama signed — legislation that reduced the 100-to-1 ratio down to 18-to-1. Attorney General Eric Holder upped the ante this past summer, announcing a series of broader policy reforms that would work to reduce harsh drug sentences by giving prosecutors flexibility to avoid charging a defendant with crimes that carry mandatory minimum prison sentences. And at the end of last year, President Obama pardoned thirteen people and commuted the sentences of eight prisoners who were sentenced under the old ratio and were therefore serving long sentences for crack cocaine convictions.

These reforms took over 20 years. But as technology marches faster than the slow pace of legal change, we don’t have that kind of time to apply the lessons learned from the failed “war on drugs” experiment to the growing wave of computer crime prosecutions.

And It Doesn’t Even Work

The government’s mindset is that technology and the internet can wreak havoc. Disseminating the login credentials of a powerful media company to vandalize a few websites, for example, has the potential to cause more damage than spray-painting graffiti on a highway sign.

That is undoubtedly true. But will aggressive, excessive punishment really deter others here? This country’s experience with the war on drugs suggests the answer is a resounding no.

We shouldn’t let the government’s fear of computers justify disproportionate punishment.

The problem is pronounced with much of the politically motivated online crime that has splashed the headlines. As a generation of people who grew up plugged in and online realized there is no way to voice their complaints within the mainstream political establishment, they decided to take their protests to the medium they know best. Harsh punishment is only going to reinforce and harden that generation’s pessimism towards the government.

This is not to say that “anything goes” online or that crimes should go unpunished. But we need to question whether locking people up for long periods of time — without addressing the root concerns about concentrated political power, civil liberties abuses, and transparency — will have the effect of deterrence or worse yet, a hardened cynicism that perpetuates the endless cycle of punishment. That’s true of even non-politically motivated cybercrime, or really, all crime … whether it involves a computer or not.

* * *

There may be hope yet.

Recently, 11 members of the “PayPal 14,” a group of individuals affiliated with Anonymous who DDoS’d PayPal in 2010 to protest its refusal to process donations to Wikileaks, pleaded guilty to felony CFAA charges in federal court. But their sentences were put off for one year (rather than receiving tough prison sentences). If the defendants stay out of trouble during that time, the felony convictions will be dropped when they come back to court, and they’ll be sentenced to misdemeanors instead. Most of the defendants will avoid jail time, and will have to pay $5,600 to PayPal in restitution.

But for most of these defendants, the experience of going through a federal criminal prosecution is going to be enough to deter them from doing something similar again. Not to mention the financial penalties and misdemeanor convictions. And for those who aren’t deterred? The punishment will appropriately increase the next time. There’s just no need to excessively punish all wrongdoers.

We shouldn’t let the government’s fear of computers justify disproportionate punishment. The type of graduated punishment in the PayPal 14 case is routine in low-level, physical-world criminal cases brought in state courts throughout the country; it can work with computer crime too.

It’s time for the government to learn from its failed 20th century experiment over-punishing drugs and start making sensible decisions about high-tech punishment in the 21st century. It can’t afford to be behind when it comes to tech, especially as the impacts of “cyber-panic” on users — beyond hackers — are very real.

Why I’m Betting on Julia

Comments:"Why I’m Betting on Julia"

URL:http://www.evanmiller.org/why-im-betting-on-julia.html


By Evan Miller

January 23, 2014

The problem with most programming languages is they're designed by language geeks, who tend to worry about things that I don't much care for. Safety, type systems, homoiconicity, and so forth. I'm sure these things are great, but when I'm messing around with a new project for fun, my two concerns are 1) making it work and 2) making it fast. For me, code is like a car. It's a means to an end. The "expressiveness" of a piece of code is about as important to me as the "expressiveness" of a catalytic converter.

This approach to programming is often (derisively) called cowboy coding. I don't think a cowboy is quite the right image, because a cowboy must take frequent breaks due to the physical limitations of his horse. A better aspirational image is an obsessed scientist who spends weeks in the laboratory and emerges, bleary-eyed, exhausted, and wan, with an ingenious new contraption that possibly causes a fire on first use.

Enough about me. Normally I use one language to make something work, and a second language to make it fast, and a third language to make it scream. This pattern is fairly common. For many programmers, the prototyping language is often Python, Ruby, or R. Once the code works, you rewrite the slow parts in C or C++. If you are truly insane, you then rewrite the inner C loops using assembler, CUDA, or OpenCL.

Unfortunately, there's a big wall in between the prototyping language and C, and another big wall between C and assembler. Besides having to learn three different languages to get the job done, you have to mentally switch between the layers of abstraction. At a more quotidian level, you have to write a significant amount of glue code, and often find yourself switching between different source files, different code editors, and disparate debuggers.

I read about Julia a while back, and thought it sounded cool, but not like something I urgently needed. Julia is a dynamic language with great performance. That's nice, I thought, but I've already invested a lot of time putting a Ferrari engine into my VW Beetle — why would I buy a new car? Besides, nowadays a number of platforms — Java HotSpot, PyPy, and asm.js, to name a few — claim to offer "C performance" from a language other than C.

Only later did I realize what makes Julia different from all the others. Julia breaks down the second wall — the wall between your high-level code and native assembly. Not only can you write code with the performance of C in Julia, you can take a peek behind the curtain of any function into its LLVM Intermediate Representation as well as its generated assembly code — all within the REPL. Check it out.


emiller ~/Code/julia (master) ./julia
               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" to list help topics
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.3.0-prerelease+261 (2013-11-30 12:55 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit 97b5983 (0 days old master)
|__/                   |  x86_64-apple-darwin12.5.0

julia> f(x) = x * x
f (generic function with 1 method)

julia> f(2.0)
4.0

julia> code_llvm(f, (Float64,))

define double @julia_f662(double) {
top:
  %1 = fmul double %0, %0, !dbg !3553
  ret double %1, !dbg !3553
}

julia> code_native(f, (Float64,))
        .section        __TEXT,__text,regular,pure_instructions
Filename: none
Source line: 1
        push    RBP
        mov     RBP, RSP
Source line: 1
        vmulsd  XMM0, XMM0, XMM0
        pop     RBP
        ret

Bam — you can go from writing a one-line function to inspecting its LLVM-optimized x86 assembly code in about 20 seconds.

So forget the stuff you may have read about Julia's type system, multiple dispatch and homoiconi-whatever. That stuff is cool (I guess), but if you're like me, the real benefit is being able to go from the first prototype all the way to balls-to-the-wall multi-core SIMD performance optimizations without ever leaving the Julia environment.

That, in a nutshell, is why I'm betting on Julia. I hesitate to make the comparison, but it's poised to do for technical computing what Node.js is doing for web development — getting disparate groups of programmers to code in the same language. With Node.js, it was the front-end designers and the back-end developers. With Julia, it's the domain experts and the speed freaks. That is a major accomplishment.

Julia's only drawback at this point is the relative dearth of libraries — but the language makes it unusually easy to interface with existing C libraries. Unlike with native interfaces in other languages, you can call C code without writing a single line of C, and so I anticipate that Julia's libraries will catch up quickly. From personal experience, I was able to access 5K lines of C code using about 150 lines of Julia — and no extra glue code in C.
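As a quick sketch of what zero-glue interop looks like (my own example, not from the post): a single `ccall` names the C symbol, the return type, and the argument types, and Julia handles the string conversion itself.

```julia
# Minimal sketch of Julia's C interop (illustrative example).
# ccall takes the C function (a symbol, or a (symbol, "library") tuple),
# the return type, a tuple of argument types, and the arguments.
# strlen is already linked into the process, so no library name is needed.
c_strlen(s::AbstractString) = Int(ccall(:strlen, Csize_t, (Cstring,), s))

println(c_strlen("hello"))  # prints 5
```

A thin wrapper like this is typically all the "glue" a Julia binding needs.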

If you work in a technical group that's in charge of a dizzying mix of Python, C, C++, Fortran, and R code — or if you're just a performance-obsessed gunslinging cowboy shoot-from-the-hip Lone Ranger like me — I encourage you to download Julia and take it for a spin. If you're hesitant to complicate your professional life with Yet Another Programming Language, think of Julia as a tool that will eventually help you reduce the number of languages that your project depends on.

I almost neglected to mention: Julia is actually quite a nice language, even ignoring its excellent performance characteristics. I'm no language aesthete, but learning it entailed remarkably few head-scratching moments. At present Julia is in my top 3 favorite programming languages.

Finally, you'll find an active and supportive Julia community. My favorite part about the community is that it is full of math-and-science types who tend to be very smart and very friendly. That's because Julia was not designed by language geeks — it came from math, science, and engineering MIT students who wanted a fast, practical language to replace C and Fortran. So it's not designed to be beautiful (though it is); it's designed to give you answers quickly. That, for me, is what computing is all about.

(By the way, if you're in the Chicago area, Leah Hanson is hosting a free workshop at the University of Chicago on Feb. 1. Join us!)

Want to learn more from your data? My desktop statistics software Wizard can help you apply statistics and communicate discoveries visually without spending days struggling with pointless command syntax. Check it out!



Bitcoin Now Accepted at TigerDirect.com!

Comments:"Bitcoin Now Accepted at TigerDirect.com!"

URL:http://www.tigerdirect.com/bitcoin/


Bitcoin miners run specific software on their computers to help collectively solve very large and complex problems, much like "SETI at Home" or "Folding at Home".

Every transaction that takes place using bitcoins is recorded in a public ledger called the blockchain. Basically, the blockchain is a history of all confirmed transactions and a record of how much each bitcoin wallet holds. Think of it as an army of accountants constantly recording who has how much and who paid whom on a huge piece of paper for all to verify. The process called bitcoin mining confirms each of these transactions before it is saved into the blockchain. Occasionally, a transaction is really a reward of several bitcoins. This reward is what gives bitcoin miners the incentive to mine. Because the process of searching for bitcoin takes a lot of effort for computers, it has come to be called "mining".
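To make those "large and complex problems" concrete, here is a toy proof-of-work loop in Python (my own illustration, not from this page): real Bitcoin mining hashes an 80-byte block header twice with SHA-256 against a 256-bit target, but the shape of the search is the same.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Try nonces until sha256(block_data + nonce) starts with
    `difficulty` hex zeros -- a toy stand-in for Bitcoin's target check."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("Alice pays Bob 1 BTC")
digest = hashlib.sha256(f"Alice pays Bob 1 BTC{nonce}".encode()).hexdigest()
print(nonce, digest)
```

Raising `difficulty` by one multiplies the expected work by sixteen, which is why specialized hardware quickly took over from ordinary PCs.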

So, how do I get started on this mining gig?

Like many things, mining for bitcoins can be quite easy or very hard. Today, we have custom solutions for your needs.

Hardware:

AMD offers the ability to build your own custom mining PC. Optimize your machine to run the way you want on the budget you want.

Start here for AMD hardware.

Butterfly Labs provides a complete solution to get started easily. They provide the hardware and software you need to start mining!

Coming Soon!

Software:

You'll need a wallet to begin collecting and storing your Bitcoins. For info regarding a wallet, read our Get Started section or sign up now for a free wallet.

Once you have your wallet and hardware ready to go, all that's left is getting your own mining application and you're ready to start mining!

The Truths Behind 'Dr. Strangelove' : The New Yorker

Comments:"The Truths Behind 'Dr. Strangelove' : The New Yorker"

URL:http://www.newyorker.com/online/blogs/newsdesk/2014/01/strangelove-for-real.html


This month marks the fiftieth anniversary of Stanley Kubrick’s black comedy about nuclear weapons, “Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb.” Released on January 29, 1964, the film caused a good deal of controversy. Its plot suggested that a mentally deranged American general could order a nuclear attack on the Soviet Union, without consulting the President. One reviewer described the film as “dangerous … an evil thing about an evil thing.” Another compared it to Soviet propaganda. Although “Strangelove” was clearly a farce, with the comedian Peter Sellers playing three roles, it was criticized for being implausible. An expert at the Institute for Strategic Studies called the events in the film “impossible on a dozen counts.” A former Deputy Secretary of Defense dismissed the idea that someone could authorize the use of a nuclear weapon without the President’s approval: “Nothing, in fact, could be further from the truth.” (See a compendium of clips from the film.) When “Fail-Safe”—a Hollywood thriller with a similar plot, directed by Sidney Lumet—opened, later that year, it was criticized in much the same way. “The incidents in ‘Fail-Safe’ are deliberate lies!” General Curtis LeMay, the Air Force chief of staff, said. “Nothing like that could happen.”

The first casualty of every war is the truth—and the Cold War was no exception to that dictum. Half a century after Kubrick’s mad general, Jack D. Ripper, launched a nuclear strike on the Soviets to defend the purity of “our precious bodily fluids” from Communist subversion, we now know that American officers did indeed have the ability to start a Third World War on their own. And despite the introduction of rigorous safeguards in the years since then, the risk of an accidental or unauthorized nuclear detonation hasn’t been completely eliminated.

The command and control of nuclear weapons has long been plagued by an “always/never” dilemma. The administrative and technological systems that are necessary to insure that nuclear weapons are always available for use in wartime may be quite different from those necessary to guarantee that such weapons can never be used, without proper authorization, in peacetime. During the nineteen-fifties and sixties, the “always” in American war planning was given far greater precedence than the “never.” Through two terms in office, beginning in 1953, President Dwight D. Eisenhower struggled with this dilemma. He wanted to retain Presidential control of nuclear weapons while defending America and its allies from attack. But, in a crisis, those two goals might prove contradictory, raising all sorts of difficult questions. What if Soviet bombers were en route to the United States but the President somehow couldn’t be reached? What if Soviet tanks were rolling into West Germany but a communications breakdown prevented NATO officers from contacting the White House? What if the President were killed during a surprise attack on Washington, D.C., along with the rest of the nation’s civilian leadership? Who would order a nuclear retaliation then?

With great reluctance, Eisenhower agreed to let American officers use their nuclear weapons, in an emergency, if there were no time or no means to contact the President. Air Force pilots were allowed to fire their nuclear anti-aircraft rockets to shoot down Soviet bombers heading toward the United States. And about half a dozen high-level American commanders were allowed to use far more powerful nuclear weapons, without contacting the White House first, when their forces were under attack and “the urgency of time and circumstances clearly does not permit a specific decision by the President, or other person empowered to act in his stead.” Eisenhower worried that providing that sort of authorization in advance could make it possible for someone to do “something foolish down the chain of command” and start an all-out nuclear war. But the alternative—allowing an attack on the United States to go unanswered or NATO forces to be overrun—seemed a lot worse. Aware that his decision might create public unease about who really controlled America’s nuclear arsenal, Eisenhower insisted that his delegation of Presidential authority be kept secret. At a meeting with the Joint Chiefs of Staff, he confessed to being “very fearful of having written papers on this matter.”

President John F. Kennedy was surprised to learn, just a few weeks after taking office, about this secret delegation of power. “A subordinate commander faced with a substantial military action,” Kennedy was told in a top-secret memo, “could start the thermonuclear holocaust on his own initiative if he could not reach you.” Kennedy and his national-security advisers were shocked not only by the wide latitude given to American officers but also by the loose custody of the roughly three thousand American nuclear weapons stored in Europe. Few of the weapons had locks on them. Anyone who got hold of them could detonate them. And there was little to prevent NATO officers from Turkey, Holland, Italy, Great Britain, and Germany from using them without the approval of the United States.

In December, 1960, fifteen members of Congress serving on the Joint Committee on Atomic Energy had toured NATO bases to investigate how American nuclear weapons were being deployed. They found that the weapons—some of them about a hundred times more powerful than the bomb that destroyed Hiroshima—were routinely guarded, transported, and handled by foreign military personnel. American control of the weapons was practically nonexistent. Harold Agnew, a Los Alamos physicist who accompanied the group, was especially concerned to see German pilots sitting in German planes that were decorated with Iron Crosses—and carrying American atomic bombs. Agnew, in his own words, “nearly wet his pants” when he realized that a lone American sentry with a rifle was all that prevented someone from taking off in one of those planes and bombing the Soviet Union.

* * *

The Kennedy Administration soon decided to put locking devices inside NATO’s nuclear weapons. The coded electromechanical switches, known as “permissive action links” (PALs), would be placed on the arming lines. The weapons would be inoperable without the proper code—and that code would be shared with NATO allies only when the White House was prepared to fight the Soviets. The American military didn’t like the idea of these coded switches, fearing that mechanical devices installed to improve weapon safety would diminish weapon reliability. A top-secret State Department memo summarized the view of the Joint Chiefs of Staff in 1961: “all is well with the atomic stockpile program and there is no need for any changes.”

After a crash program to develop the new control technology, during the mid-nineteen-sixties, permissive action links were finally placed inside most of the nuclear weapons deployed by NATO forces. But Kennedy’s directive applied only to the NATO arsenal. For years, the Air Force and the Navy blocked attempts to add coded switches to the weapons solely in their custody. During a national emergency, they argued, the consequences of not receiving the proper code from the White House might be disastrous. And locked weapons might play into the hands of Communist saboteurs. “The very existence of the lock capability,” a top Air Force general claimed, “would create a fail-disable potential for knowledgeable agents to ‘dud’ the entire Minuteman [missile] force.” The Joint Chiefs thought that strict military discipline was the best safeguard against an unauthorized nuclear strike. A two-man rule was instituted to make it more difficult for someone to use a nuclear weapon without permission. And a new screening program, the Human Reliability Program, was created to stop people with emotional, psychological, and substance-abuse problems from gaining access to nuclear weapons.

Despite public assurances that everything was fully under control, in the winter of 1964, while “Dr. Strangelove” was playing in theatres and being condemned as Soviet propaganda, there was nothing to prevent an American bomber crew or missile launch crew from using their weapons against the Soviets. Kubrick had researched the subject for years, consulted experts, and worked closely with a former R.A.F. pilot, Peter George, on the screenplay of the film. George’s novel about the risk of accidental nuclear war, “Red Alert,” was the source for most of “Strangelove” ’s plot. Unbeknownst to both Kubrick and George, a top official at the Department of Defense had already sent a copy of “Red Alert” to every member of the Pentagon’s Scientific Advisory Committee for Ballistic Missiles. At the Pentagon, the book was taken seriously as a cautionary tale about what might go wrong. Even Secretary of Defense Robert S. McNamara privately worried that an accident, a mistake, or a rogue American officer could start a nuclear war.

Coded switches to prevent the unauthorized use of nuclear weapons were finally added to the control systems of American missiles and bombers in the early nineteen-seventies. The Air Force was not pleased, and considered the new security measures to be an insult, a lack of confidence in its personnel. Although the Air Force now denies this claim, according to more than one source I contacted, the code necessary to launch a missile was set to be the same at every Minuteman site: 00000000.

* * *

The early permissive action links were rudimentary. Placed in NATO weapons during the nineteen-sixties and known as Category A PALs, the switches relied on a split four-digit code, with ten thousand possible combinations. If the United States went to war, two people would be necessary to unlock a nuclear weapon, each of them provided with half the code. Category A PALs were useful mainly to delay unauthorized use, to buy time after a weapon had been taken or to thwart an individual psychotic hoping to cause a large explosion. A skilled technician could open a stolen weapon and unlock it within a few hours. Today’s Category D PALs, installed in the Air Force’s hydrogen bombs, are more sophisticated. They require a six-digit code, with a million possible combinations, and have a limited-try feature that disables a weapon when the wrong code is repeatedly entered.
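For illustration only (my sketch, not from the article), the two mechanisms described here, a split code held by two people and a limited-try lockout, can be modeled in a few lines of Python:

```python
class ToyPAL:
    """Hypothetical model combining two PAL features from the article:
    a code split between two officers (Category A) and a limited-try
    feature that disables the weapon on repeated wrong codes (Category D)."""

    def __init__(self, code: str, max_tries: int = 3):
        self._code = code
        self._tries_left = max_tries
        self.disabled = False

    def unlock(self, first_half: str, second_half: str) -> bool:
        if self.disabled:
            return False
        if first_half + second_half == self._code:
            return True
        self._tries_left -= 1
        if self._tries_left == 0:
            self.disabled = True  # too many wrong codes: the weapon duds itself
        return False

pal = ToyPAL("1234")
print(pal.unlock("12", "34"))  # both halves correct: unlocks
print(pal.unlock("99", "99"))  # wrong code: refused, one try consumed
```

Neither officer alone holds a working code, and guessing is bounded by the lockout, which is the whole point of the design.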

The Air Force’s land-based Minuteman III missiles and the Navy’s submarine-based Trident II missiles now require an eight-digit code—which is no longer 00000000—in order to be launched. The Minuteman crews receive the code via underground cables or an aboveground radio antenna. Sending the launch code to submarines deep underwater presents a greater challenge. Trident submarines contain two safes. One holds the keys necessary to launch a missile; the other holds the combination to the safe with the keys; and the combination to the safe holding the combination must be transmitted to the sub by very-low-frequency or extremely-low-frequency radio. In a pinch, if Washington, D.C., has been destroyed and the launch code doesn’t arrive, the sub’s crew can open the safes with a blowtorch.

The security measures now used to control America’s nuclear weapons are a vast improvement over those of 1964. But, like all human endeavors, they are inherently flawed. The Department of Defense’s Personnel Reliability Program is supposed to keep people with serious emotional or psychological issues away from nuclear weapons—and yet two of the nation’s top nuclear commanders were recently removed from their posts. Neither appears to be the sort of calm, stable person you want with a finger on the button. In fact, their misbehavior seems straight out of “Strangelove.”

Vice Admiral Tim Giardina, the second-highest-ranking officer at the U.S. Strategic Command—the organization responsible for all of America’s nuclear forces—was investigated last summer for allegedly using counterfeit gambling chips at the Horseshoe Casino in Council Bluffs, Iowa. According to the Iowa Division of Criminal Investigation, “a significant monetary amount” of counterfeit chips was involved. Giardina was relieved of his command on October 3, 2013. A few days later, Major General Michael Carey, the Air Force commander in charge of America’s intercontinental ballistic missiles, was fired for conduct “unbecoming an officer and a gentleman.” According to a report by the Inspector General of the Air Force, Carey had consumed too much alcohol during an official trip to Russia, behaved rudely toward Russian officers, spent time with “suspect” young foreign women in Moscow, loudly discussed sensitive information in a public hotel lounge there, and drunkenly pleaded to get onstage and sing with a Beatles cover band at La Cantina, a Mexican restaurant near Red Square. Despite his requests, the band wouldn’t let Carey onstage to sing or to play the guitar.

While drinking beer in the executive lounge at Moscow’s Marriott Aurora during that visit, General Carey made an admission with serious public-policy implications. He off-handedly told a delegation of U.S. national-security officials that his missile-launch officers have the “worst morale in the Air Force.” Recent events suggest that may be true. In the spring of 2013, nineteen launch officers at Minot Air Force base in North Dakota were decertified for violating safety rules and poor discipline. In August, 2013, the entire missile wing at Malmstrom Air Force base in Montana failed its safety inspection. Last week, the Air Force revealed that thirty-four launch officers at Malmstrom had been decertified for cheating on proficiency exams—and that at least three launch officers are being investigated for illegal drug use. The findings of a report by the RAND Corporation, leaked to the A.P., were equally disturbing. The study found that the rates of spousal abuse and courts-martial among Air Force personnel with nuclear responsibilities are much higher than those among people with other jobs in the Air Force. “We don’t care if things go properly,” a launch officer told RAND. “We just don’t want to get in trouble.”

The most unlikely and absurd plot element in “Strangelove” is the existence of a Soviet “Doomsday Machine.” The device would trigger itself, automatically, if the Soviet Union were attacked with nuclear weapons. It was meant to be the ultimate deterrent, a threat to destroy the world in order to prevent an American nuclear strike. But the failure of the Soviets to tell the United States about the contraption defeats its purpose and, at the end of the film, inadvertently causes a nuclear Armageddon. “The whole point of the Doomsday Machine is lost,” Dr. Strangelove, the President’s science adviser, explains to the Soviet Ambassador, “if you keep it a secret!”

A decade after the release of “Strangelove,” the Soviet Union began work on the Perimeter system—a network of sensors and computers that could allow junior military officials to launch missiles without oversight from the Soviet leadership. Perhaps nobody at the Kremlin had seen the film. Completed in 1985, the system was known as the Dead Hand. Once it was activated, Perimeter would order the launch of long-range missiles at the United States if it detected nuclear detonations on Soviet soil and Soviet leaders couldn’t be reached. Like the Doomsday Machine in “Strangelove,” Perimeter was kept secret from the United States; its existence was not revealed until years after the Cold War ended.

In retrospect, Kubrick’s black comedy provided a far more accurate description of the dangers inherent in nuclear command-and-control systems than the ones that the American people got from the White House, the Pentagon, and the mainstream media.

“This is absolute madness, Ambassador,” President Merkin Muffley says in the film, after being told about the Soviets’ automated retaliatory system. “Why should you build such a thing?” Fifty years later, that question remains unanswered, and “Strangelove” seems all the more brilliant, bleak, and terrifyingly on the mark.

You can read Eric Schlosser’s guide to the long-secret documents that help explain the risks America took with its nuclear arsenal, and watch and read his deconstruction of clips from “Dr. Strangelove” and from a little-seen film about permissive action links.

Eric Schlosser is the author of “Command and Control.”

If The Immunity Project Crowdfunds This Synthetic AIDS Vaccine, They'll Offer It Free To Everyone | Fast Company | Business + Innovation

Comments:"If The Immunity Project Crowdfunds This Synthetic AIDS Vaccine, They'll Offer It Free To Everyone | Fast Company | Business + Innovation"

URL:http://www.fastcompany.com/3025372/if-the-immunity-project-crowdfunds-this-synthetic-aids-vaccine-theyll-offer-it-free-to-every


What happens when you combine Microsoft e-Science machine learning, Harvard thinking, and a new medical device to tackle HIV-AIDS? The Immunity Project, a not-for-profit company developing the first ever synthetic HIV vaccine.

The Immunity Project’s work is based on the discovery that there are people born with a natural immunity to HIV. After identifying these "HIV controllers" in the population, the team applied machine learning to reverse-engineer the biological processes HIV controllers use to defeat the virus, mimicking natural immunity.

They’ve developed a vaccine prototype and completed preliminary laboratory testing. And today, they went live with a crowdfunding campaign to support a demonstration aimed to prove the vaccine can successfully immunize human blood. It's the last step before they begin Phase 1 human clinical trials with the FDA. Their goal is to give the vaccine away to the world, for free.

In order to complete this experiment by the end of March of this year, they need to raise $482,000 in the next 20 days. If successful, this will help solve a global problem that is still epidemic. AIDS kills nearly 5,000 people a day. While there are several contenders in the race to create a successful HIV vaccine, this one has an excellent shot at working. It's also safer for candidates than vaccines made with killed viruses or live viruses. It requires no refrigeration and is designed to be delivered via nasal inhaler, solving distribution challenges in the countries with the highest HIV infection rates.

Dr. Reid Rubsamen alongside other Immunity Project team members.

The vaccine was originally developed in a partnership between Dr. Bruce Walker from Harvard, Dr. David Heckerman, inventor of the spam filter and AAAI fellow and machine learning/artificial intelligence scientist at Microsoft e-Science Research, and Dr. Reid Rubsamen, drug delivery system expert and founder of Flow Pharma. The project was billed as a great example of multi-disciplinary innovation. Apparently, Silicon Valley accelerator Y Combinator agreed--on January 6, Immunity Project became part of the Winter 2014 Y Combinator class. According to partner Sam Altman, "This is certainly a new sort of company for us, but it's the kind of crazy idea we like.”

“Imagine a world where vaccines are developed for a tiny fraction of the big pharma cost and given away for free to everyone who needs them,” says Altman. “We thought that work done by Microsoft Research that underlies this was really interesting, and we're always interested in areas where software can change how things are done. Technology means doing more with less; this is an extreme example. I spent a fair amount of time with this group during their application process and am personally donating both money and blood."

Immunity Project

Comments:"Immunity Project"

URL:https://pledge.immunityproject.org/the-free-hiv-aids-vaccine


What is Immunity Project?

 

Immunity Project is a Y Combinator-backed non profit organization.  We are proud to be partners with Until There’s A Cure, a registered 501(c)3 organization.

 

This campaign will fund our final experiment, using human blood, before we begin our Phase I Clinical Trials.

"This is certainly a new sort of company for us, but it's the kind of crazy idea we like... I spent a fair amount of time with this group during their application process, and am personally donating both money and blood." --Sam Altman, Partner, Y Combinator

What are we doing?

 

Like the best comic book heroes, controllers are born with an incredibly rare super power. They won the genetic lottery. Although controllers carry low levels of HIV, the virus is in a dormant state and they do not contract AIDS. Only 1 out of every 300 people who are living with HIV has this incredible power.

                    

The essence of controllers’ immunity is the unique targeting capability contained within their immune systems. Like the finely tuned laser scope on a sniper rifle, the immune systems of controllers have the ability to target the biological markers on the HIV virus that are its Achilles’ heel. When a controller’s immune system attacks these biological markers it forces the virus into a dormant state. Non-controllers have sniper rifles, but they are missing this critical targeting ability.

                    

Immunity Project is a team of Stanford, Harvard, and MIT scientists and entrepreneurs based in the San Francisco Bay Area who are developing a revolutionary vaccine platform using an entirely novel approach: to adopt the unique targeting capability inherent in controllers to give everyone that same immunity to the targeted disease. The first vaccine being developed using this platform is a vaccine for HIV. It is designed to turn everyone who receives it into an HIV controller.  Immunity Project will offer our HIV vaccine to the world for free.

 

 

Why HIV/AIDS?

 

Over 35 million people are currently living with HIV. Each day an additional 7,000 become infected with the virus. Each day over 4,000 people die from AIDS, the equivalent of ten 747s falling out of the sky every single day. HIV has taken nearly 30 million lives since 1983.

 

Current responses to the pandemic are insufficient to match the challenge posed by HIV. For example, for every person who gains access to antiretroviral drugs today, two are newly infected by the virus. This is especially true in sub-Saharan Africa where the need for an HIV vaccine is of the utmost urgency.

 

 

 

Timeline

 

 

What is this campaign funding?

We are raising $482,000 to fund the final experiment before we begin our Phase I clinical study. Positive data from this experiment will provide further validation, showing that we can successfully immunize human blood against HIV in a controlled external environment.

 

Experiment Outline

Vaccinate humanized mice (transgenic NOG grafted with an Immunity Project-relevant HLA type) with either (i) an Immunity Project HIV epitope (treatment group) or (ii) a tetanus epitope (control group). Harvest spleens 14 days post-immunization and confirm presentation to killer T cells via FluoroSpot. Create an in-vitro cell culture prep with separated CD4 and CD8 T cells, wherein the CD4 cells have been inoculated with live HIV virus. Expected result: p24 HIV core antigen lower and CD4 counts greater in HIV-epitope-immunized mice.

 

Budget

For the Mouse Experiment

Animals = $40K

Animal handling (dosing, housing, spleen harvest) = $150K

Reagents (including HIV virus, media, antibodies, magnetic sep, p24 assay, MPLA, CpG, etc) = $100K

FluoroSpot plates = $55K

 

For the Lab

40 HLA Type determinations = $30K

HLA Subtyping determinations = $20K

Flow Cytometer = $60K

Larger capacity Clinical centrifuge = $6K

Larger capacity CO2 incubator = $6K

Biosafety cabinet = $8K

Lab Refrigerator = $2K

Lab Freezer = $2K

Rent = $3K

 

 

 

Media Coverage

 

"The Y Combinator-backed project discovered how to mimic natural immunity to HIV" - Fast Company

 

"But Y Combinator is now doing something it has never done before–backing a young pharmaceutical company, one that is working on a vaccine for HIV." - The Wall Street Journal

 

"A vaccine for HIV/AIDS has been the holy grail of the medical community for decades, and these guys may have found it." - Venture Beat

 

"A vaccine for HIV/AIDS has been the holy grail of the medical community for decades, and these guys may have found it." - The Verge

 

"Heckerman... made a splash recently with a software advance that eases large-scale searches within genetic databases" - Science

 

"...an effective vaccine... that strengthen a patient's immune system, as opposed to just attacking the virus with drugs" - Scientific American

 

"...the key to fighting spam and HIV is the same: Find the part that absolutely can't mutate -- what he calls the Achilles' heel -- and attack there" - LATimes

 

 

 

Meet the Team

 

Dr. Reid Rubsamen - Chief Executive Officer and Co-Founder

Stanford MD and MA in Computer Science. 60+ patents for novel drug delivery technologies. Founder of Aradigm.

 

Naveen Jain - Chief Marketing Officer and Co-Founder

Entrepreneur and CEO of Sparkart.

 

Dr. Charles Herst - Chief Science Officer

UC Berkeley MA in Bacteriology and Northwestern PhD in Tumor Cell Biology.

 

Dr. Salim Abdool Karim - Clinical Investigator

MD and PhD. Professor of Clinical Epidemiology at Columbia. Director of CAPRISA.

 

Ian Cinnamon - Director of Strategy

BS from MIT in Cognitive Science. Entrepreneur, Author.

 

Howie Diamond - Director of Strategy & Marketing

Entrepreneur and Director of Business Development at Sparkart.

 

4 HTTP Security headers you should always be using | ibuildings


Comments:"4 HTTP Security headers you should always be using | ibuildings"

URL:http://ibuildings.nl/blog/2013/03/4-http-security-headers-you-should-always-be-using


What started as a dream for a worldwide library of sorts has transformed into not only a global repository of knowledge but also the most popular and widely deployed application platform: the World Wide Web.
The poster child for Agile, it was not developed as a whole by a single entity, but rather grew as servers and clients expanded its capabilities. Standards grew along with them.

While growing a solution works very well for discovering what works and what doesn't, it hardly leads to a consistent and easy to apply programming model. This is especially true for security: where ideally the simplest thing that works is also the most secure, it is far too easy to introduce vulnerabilities like XSS, CSRF, or Clickjacking.

Because HTTP is an extensible protocol, browsers have pioneered some useful headers to prevent or increase the difficulty of exploiting these vulnerabilities. Knowing what they are and when to apply them can help you increase the security of your system.

 

1. Content-Security-Policy

What's so good about it?

How would you like to be largely invulnerable to XSS? To have the browser straight up refuse to run <script>alert(1);</script> even if someone managed to trick your server into writing it?

That's the promise of Content-Security-Policy. 

Adding the Content-Security-Policy header with the appropriate value allows you to restrict the origin of the following:

  • script-src: JavaScript code (the biggest reason to use this header)
  • connect-src: XMLHttpRequest, WebSockets, and EventSource
  • font-src: fonts
  • frame-src: frame URLs
  • img-src: images
  • media-src: audio & video
  • object-src: Flash (and other plugins)
  • style-src: CSS

So specifying the following:

Content-Security-Policy: script-src 'self' https://apis.google.com

Means that script files may only come from the current domain or from apis.google.com (the Google JavaScript CDN).

Another helpful feature is that you can automatically enable sandbox mode for all iframes on your site.

And if you want to test the waters, you can use the 'Content-Security-Policy-Report-Only' header to do a dry run of your policy and have the browser post the results to a URL of your choosing.
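To make the header's format concrete, here is a small Python sketch. The helper name and the report endpoint are mine, not from the spec or any library; it simply serializes a mapping of directives into the semicolon-separated value browsers expect.

```python
# Hypothetical helper: serialize a CSP policy mapping into a header value.
# Directives are sorted so the output is deterministic.
def build_csp(directives):
    return "; ".join(
        "%s %s" % (name, " ".join(sources))
        for name, sources in sorted(directives.items())
    )

policy = build_csp({
    "script-src": ["'self'", "https://apis.google.com"],
    "img-src": ["'self'"],
})

# While testing the waters, send the same value under the Report-Only name
# and collect violations at an endpoint of your choosing:
headers = {
    "Content-Security-Policy-Report-Only": policy + "; report-uri /csp-report",
}
```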

It is definitely worth the time to read the excellent HTML5Rocks introduction.

 

What's the catch

Unfortunately Internet Explorer (IE) only supports the sandbox mode, and with an 'X-' prefix no less. Also, Android support is pretty new (4.4).

And of course it can't protect against all XSS: if you generate your JavaScript dynamically (a bad idea, but not uncommon in practice), someone may still trick your server into generating bad JS.

But including it does no harm and will protect users on Chrome, Firefox and iOS.
 

Where does it work?

 

Where do I learn more about it?

HTML5Rocks has an awesome introduction. Other than that, the W3C spec is quite readable.

 

2. X-Frame-Options

What's so good about it?

Stop Clickjacking with one simple header:

X-Frame-Options: DENY

This will cause browsers to refuse to render that page inside a frame.

Supplying the value 'SAMEORIGIN' will allow framing only from the same origin, and 'ALLOW-FROM https://url-here.example.com' will allow you to specify a single permitted origin (though browser support for ALLOW-FROM is limited).
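The three possible values fit in a tiny helper. This is a sketch (the function name is mine, not from any library) that defaults to the safest choice:

```python
# Hypothetical helper: pick an X-Frame-Options value for a response.
# DENY is the safest default; only relax it when you actually need framing.
def frame_options(allow_origin=None, same_origin=False):
    if allow_origin:
        # ALLOW-FROM takes a single origin; browser support for it is limited
        return "ALLOW-FROM %s" % allow_origin
    return "SAMEORIGIN" if same_origin else "DENY"
```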

 

What's the catch?

This header will eventually be deprecated and its functionality moved to Content-Security-Policy 1.1 (which does not have the same level of support yet).
But until that has wider support, there is no reason not to use this header.

 

Where does it work?

  • IE: 8+
  • Firefox: 3.6.9+
  • Chrome: 4.1.249+
  • iOS Safari: ?
  • Android Browser: ?

(data from Mozilla Developer Network)

 

Where do I learn more about it?

Not much more to learn, but if you want some more information you can check out the Mozilla Developer Network article on the topic.

Also Coding Horror has an old (2009) but good article on Clickjacking / framing: We done been framed.

 

3. X-Content-Type-Options

What's so good about it?

Letting your users upload files is inherently dangerous, serving up files uploaded by users is even more dangerous and difficult to get right.  

This isn't made any easier by browsers second-guessing the Content-Type of what you're serving by doing MIME sniffing.

The X-Content-Type-Options header allows you to, in effect, say to browsers that yes, you know what you're doing and the Content-Type is correct. It has only one allowed value: 'nosniff'.

GitHub uses it, you can too.
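As an illustration, here is a hedged sketch of the headers a download endpoint for user uploads might send. The helper name is mine, and the Content-Disposition line is an extra precaution beyond what the article covers:

```python
# Hypothetical sketch: response headers for serving a user-uploaded file.
def download_headers(content_type):
    return {
        "Content-Type": content_type,
        "X-Content-Type-Options": "nosniff",  # the header's only allowed value
        "Content-Disposition": "attachment",  # download rather than render inline
    }
```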

 

What's the catch?

Only works for IE and Chrome, though depending on your audience that could be 65% of your visitors that you're protecting.

 

Where does it work?

  • IE: 8+
  • Firefox: - (bug 471020)
  • Chrome: 1+
  • iOS Safari: -
  • Android Browser: -

 

Where do I learn more about it?

FOX IT has an excellent article on MIME sniffing: MIME Sniffing: feature or vulnerability? and the IT Security Stackexchange has a dedicated question on this topic: Does X-Content-Type-Options really prevent content sniffing attacks?

 

4. Strict-Transport-Security

What's so good about it?

My online banking system uses HTTPS, providing authenticity (that yes, I really am connecting to my bank) and transport security (anybody snooping in would only see the encrypted traffic).

However, there is a problem with it...
When I type "onlinebanking.example.com" into the address bar of my browser, it will connect over plain old HTTP by default. It's only when the server then redirects the user to HTTPS (which is a bad idea in theory, but a good one in practice) that I get my secure connection. Unfortunately this redirect gives an attacker a window to play man-in-the-middle. To solve this, the Strict-Transport-Security header was added.

The HTTP Strict-Transport-Security (HSTS) header instructs the browser to (for a given time) only use HTTPS. If, for instance, you go to https://hsts.example.com and (among others) it returns the following header:

Strict-Transport-Security: max-age=31536000; includeSubDomains

Then even typing in http://hsts.example.com will make the browser connect to https://hsts.example.com.

It will do this for as long as the HSTS header is valid, which in the case of the example is 1 year since the last response that sent the HSTS header. So if I visit the site once on January 1st 2013, it will be valid until January 1st 2014. But if I visit again on December 31st 2013 it will not only still be valid, it will reset the validity to be valid until December 31st 2014.
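Since max-age is expressed in seconds, it is easy to get the arithmetic wrong; a small sketch (the helper name is mine) that builds the value from a window in days:

```python
# Hypothetical helper: build an HSTS header value for a validity window in days.
def hsts_header(days, include_subdomains=True):
    value = "max-age=%d" % (days * 24 * 60 * 60)  # max-age counts seconds
    if include_subdomains:
        value += "; includeSubDomains"
    return value

hsts_header(365)  # "max-age=31536000; includeSubDomains", the one-year example above
```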

 

What's the catch?

It only works on Chrome and Firefox for now; your Internet Explorer users are still vulnerable. Nevertheless, it's worth implementing, as it's an official IETF standard and the next IE should implement it real soon now...

Also, you don't want to implement this unless you're using HTTPS. But why wouldn't you be using HTTPS? Remember that HTTPS not only guarantees that your content (and your users' content) will be encrypted and therefore uninterceptable, it also provides authenticity: a promise to your users that yes, this content really came from you.

Why you should always use HTTPS is a different discussion and, as evidenced by the fact that both that blog post and this one are not served over HTTPS, still an uphill battle. But if you're using HTTPS you should probably use HSTS too.

 

Where does it work?

Where do I learn more about it?

The Mozilla Developer Network has a good article on it: HTTP Strict Transport Security.  

 

If you're doing Symfony2 or Drupal

For Symfony2 take a look at the NelmioSecurityBundle and for Drupal check out the Security Kit module which allow you to specify all the aforementioned headers!

 

Hall of Shame: X-Requested-With

By default jQuery sends the X-Requested-With header. It was thought that the mere presence of this header could be used to defeat Cross-Site Request Forgery: surely no request carrying this header and a user's session could be initiated by a third party, since in a browser only XMLHttpRequest is allowed to set custom headers.

Unfortunately, as the Ruby on Rails and Django frameworks soon found out, while this is a good measure for defence in depth, it cannot be fully relied on in the face of third-party plugins like Java or Adobe Flash.

 

Conclusion

Using the HTTP headers discussed above allows you to quickly and easily protect your users from XSS, Clickjacking, MIME sniffing vulnerabilities and man-in-the-middle attacks.

If you aren't using these headers yet, now might be a good time to introduce them to your application or webserver configuration.
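As a recap, all four headers can be bolted onto an application in one place. This is a minimal Python WSGI middleware sketch, not production configuration; the policy values are just the examples from this article and should be tuned to your site:

```python
# Header values from the examples in this article; adjust for your own site.
SECURITY_HEADERS = [
    ("Content-Security-Policy", "script-src 'self'"),
    ("X-Frame-Options", "DENY"),
    ("X-Content-Type-Options", "nosniff"),
    ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
]

def security_headers_middleware(app):
    """Wrap a WSGI app so every response carries the headers above."""
    def wrapped(environ, start_response):
        def start(status, headers, exc_info=None):
            return start_response(status, list(headers) + SECURITY_HEADERS, exc_info)
        return app(environ, start)
    return wrapped
```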

Keep your users safe out there.
