Channel: Hacker News 100

Announcing the Female Founders Conference - Y Combinator Posthaven


URL:http://blog.ycombinator.com/announcing-the-female-founders-conference


Jessica Livingston

I'm delighted to announce that Kat Manalac, Kirsty Nathoo, Carolynn Levy and I are hosting Y Combinator's first Female Founders Conference on Saturday, March 1. We're going to gather together female founders at all stages to share stories, give advice, and make connections.

The original idea was to make this an event where female YC alumni shared their experiences.  But once we started planning the event we thought it would be exciting to invite Julia Hartz and Diane Greene to speak as well, so that we'd have speakers who could talk about what happens at even later stages.

As well as the speakers, many female YC alumni will be attending the event, so this will be an opportunity to get to know them and ask questions.

The best source of information about startups is the stories of people who've started them.  Our goal with this conference is to inspire women to start (or hang in there with!) a startup through the insights and experiences of those who have done it already.  If you're a woman interested in learning more about startups, I encourage you to apply.


Why Bitcoin Matters - NYTimes.com


URL:http://mobile.nytimes.com/blogs/dealbook/2014/01/21/why-bitcoin-matters/


Editor’s note: Marc Andreessen’s venture capital firm, Andreessen Horowitz, has invested just under $50 million in Bitcoin-related start-ups. The firm is actively searching for more Bitcoin-based investment opportunities. He does not personally own more than a de minimis amount of Bitcoin.

A mysterious new technology emerges, seemingly out of nowhere, but actually the result of two decades of intense research and development by nearly anonymous researchers.

Political idealists project visions of liberation and revolution onto it; establishment elites heap contempt and scorn on it.

On the other hand, technologists – nerds – are transfixed by it. They see within it enormous potential and spend their nights and weekends tinkering with it.

Eventually mainstream products, companies and industries emerge to commercialize it; its effects become profound; and later, many people wonder why its powerful promise wasn’t more obvious from the start.

What technology am I talking about? Personal computers in 1975, the Internet in 1993, and – I believe – Bitcoin in 2014.

One can hardly accuse Bitcoin of being an uncovered topic, yet the gulf between what the press and many regular people believe Bitcoin is, and what a growing critical mass of technologists believe Bitcoin is, remains enormous. In this post, I will explain why Bitcoin has so many Silicon Valley programmers and entrepreneurs all lathered up, and what I think Bitcoin’s future potential is.

First, Bitcoin at its most fundamental level is a breakthrough in computer science – one that builds on 20 years of research into cryptographic currency, and 40 years of research in cryptography, by thousands of researchers around the world.

Bitcoin is the first practical solution to a longstanding problem in computer science called the Byzantine Generals Problem. To quote from the original paper defining the B.G.P.: “[Imagine] a group of generals of the Byzantine army camped with their troops around an enemy city. Communicating only by messenger, the generals must agree upon a common battle plan. However, one or more of them may be traitors who will try to confuse the others. The problem is to find an algorithm to ensure that the loyal generals will reach agreement.”

More generally, the B.G.P. poses the question of how to establish trust between otherwise unrelated parties over an untrusted network like the Internet.

The practical consequence of solving this problem is that Bitcoin gives us, for the first time, a way for one Internet user to transfer a unique piece of digital property to another Internet user, such that the transfer is guaranteed to be safe and secure, everyone knows that the transfer has taken place, and nobody can challenge the legitimacy of the transfer. The consequences of this breakthrough are hard to overstate.

What kinds of digital property might be transferred in this way? Think about digital signatures, digital contracts, digital keys (to physical locks, or to online lockers), digital ownership of physical assets such as cars and houses, digital stocks and bonds … and digital money.

All these are exchanged through a distributed network of trust that does not require or rely upon a central intermediary like a bank or broker. And all in a way where only the owner of an asset can send it, only the intended recipient can receive it, the asset can only exist in one place at a time, and everyone can validate transactions and ownership of all assets anytime they want.

How does this work?

Bitcoin is an Internet-wide distributed ledger. You buy into the ledger by purchasing one of a fixed number of slots, either with cash or by selling a product or service for Bitcoin. You sell out of the ledger by trading your Bitcoin to someone else who wants to buy into the ledger. Anyone in the world can buy into or sell out of the ledger any time they want – with no approval needed, and with no or very low fees. The Bitcoin “coins” themselves are simply slots in the ledger, analogous in some ways to seats on a stock exchange, except much more broadly applicable to real-world transactions.
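To make the ledger picture concrete, here is a minimal sketch in Python. It is my illustration, not Bitcoin's actual implementation: the real ledger is replicated across thousands of machines that reach consensus, and transfers are authorized by digital signatures rather than trusted names.

# Toy, single-machine stand-in for the Bitcoin ledger (illustrative only).
class ToyLedger:
    def __init__(self):
        self.balances = {}  # owner -> units of value held in ledger slots

    def buy_in(self, owner, amount):
        "Enter the ledger, e.g. by paying cash or selling a product for Bitcoin."
        self.balances[owner] = self.balances.get(owner, 0) + amount

    def transfer(self, sender, recipient, amount):
        "Move value between slots; only the holder of the value can send it."
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")  # an asset can't be spent twice
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

ledger = ToyLedger()
ledger.buy_in("alice", 5)
ledger.transfer("alice", "bob", 2)
print(ledger.balances)  # anyone can validate ownership: {'alice': 3, 'bob': 2}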

The Bitcoin ledger is a new kind of payment system. Anyone in the world can pay anyone else in the world any amount of value of Bitcoin by simply transferring ownership of the corresponding slot in the ledger. Put value in, transfer it, the recipient gets value out, no authorization required, and in many cases, no fees.

That last part is enormously important. Bitcoin is the first Internet-wide payment system where transactions happen with either no fees or very low fees (down to fractions of pennies). Existing payment systems charge fees of about 2 to 3 percent – and that’s in the developed world. In lots of other places, there either are no modern payment systems or the rates are significantly higher. We’ll come back to that.

Bitcoin is a digital bearer instrument. It is a way to exchange money or assets between parties with no pre-existing trust: A string of numbers is sent over email or text message in the simplest case. The sender doesn’t need to know or trust the receiver or vice versa. Related, there are no chargebacks – this is the part that is literally like cash – if you have the money or the asset, you can pay with it; if you don’t, you can’t. This is brand new. This has never existed in digital form before.

Bitcoin is a digital currency, whose value is based directly on two things: use of the payment system today – volume and velocity of payments running through the ledger – and speculation on future use of the payment system. This is one part that is confusing people. It’s not as much that the Bitcoin currency has some arbitrary value and then people are trading with it; it’s more that people can trade with Bitcoin (anywhere, everywhere, with no fraud and no or very low fees) and as a result it has value.

It is perhaps true right at this moment that the value of Bitcoin currency is based more on speculation than actual payment volume, but it is equally true that that speculation is establishing a sufficiently high price for the currency that payments have become practically possible. The Bitcoin currency had to be worth something before it could bear any amount of real-world payment volume. This is the classic “chicken and egg” problem with new technology: new technology is not worth much until it’s worth a lot. And so the fact that Bitcoin has risen in value in part because of speculation is making the reality of its usefulness arrive much faster than it would have otherwise.

Critics of Bitcoin point to limited usage by ordinary consumers and merchants, but that same criticism was leveled against PCs and the Internet at the same stage. Every day, more and more consumers and merchants are buying, using and selling Bitcoin, all around the world. The overall numbers are still small, but they are growing quickly. And ease of use for all participants is rapidly increasing as Bitcoin tools and technologies are improved. Remember, it used to be technically challenging to even get on the Internet. Now it’s not.

The criticism that merchants will not accept Bitcoin because of its volatility is also incorrect. Bitcoin can be used entirely as a payment system; merchants do not need to hold any Bitcoin currency or be exposed to Bitcoin volatility at any time. Any consumer or merchant can trade in and out of Bitcoin and other currencies any time they want.

Why would any merchant – online or in the real world – want to accept Bitcoin as payment, given the currently small number of consumers who want to pay with it? My partner Chris Dixon recently gave this example:

“Let’s say you sell electronics online. Profit margins in those businesses are usually under 5 percent, which means conventional 2.5 percent payment fees consume half the margin. That’s money that could be reinvested in the business, passed back to consumers or taxed by the government. Of all of those choices, handing 2.5 percent to banks to move bits around the Internet is the worst possible choice. Another challenge merchants have with payments is accepting international payments. If you are wondering why your favorite product or service isn’t available in your country, the answer is often payments.”

In addition, merchants are highly attracted to Bitcoin because it eliminates the risk of credit card fraud. This is the form of fraud that motivates so many criminals to put so much work into stealing personal customer information and credit card numbers.

Since Bitcoin is a digital bearer instrument, the receiver of a payment does not get any information from the sender that can be used to steal money from the sender in the future, either by that merchant or by a criminal who steals that information from the merchant.

Credit card fraud is such a big deal for merchants, credit card processors and banks that online fraud detection systems are hair-trigger wired to stop transactions that look even slightly suspicious, whether or not they are actually fraudulent. As a result, many online merchants are forced to turn away 5 to 10 percent of incoming orders that they could take without fear if the customers were paying with Bitcoin, where such fraud would not be possible. Since these are orders that were coming in already, they are inherently the highest margin orders a merchant can get, and so being able to take them will drastically increase many merchants’ profit margins.

Bitcoin’s antifraud properties even extend into the physical world of retail stores and shoppers.

For example, with Bitcoin, the huge hack that recently stole 70 million consumers’ credit card information from the Target department store chain would not have been possible. Here’s how that would work:

You fill your cart and go to the checkout station like you do now. But instead of handing over your credit card to pay, you pull out your smartphone and take a snapshot of a QR code displayed by the cash register. The QR code contains all the information required for you to send Bitcoin to Target, including the amount. You click “Confirm” on your phone and the transaction is done (including converting dollars from your account into Bitcoin, if you did not own any Bitcoin).

Target is happy because it has the money in the form of Bitcoin, which it can immediately turn into dollars if it wants, and it paid no or very low payment processing fees; you are happy because there is no way for hackers to steal any of your personal information; and organized crime is unhappy. (Well, maybe criminals are still happy: They can try to steal money directly from poorly-secured merchant computer systems. But even if they succeed, consumers bear no risk of loss, fraud or identity theft.)
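As a rough sketch of the QR-code step in the checkout scenario above: wallets of this era typically read a BIP 21-style payment URI from the QR code, which carries the destination address and amount. The address, amount, and label below are invented placeholders, not real Target data.

def payment_uri(address, amount_btc, label=None):
    "Build a BIP 21-style URI that a cash register could render as a QR code."
    uri = "bitcoin:{}?amount={:.8f}".format(address, amount_btc)
    if label:
        uri += "&label=" + label  # real code would URL-encode the label
    return uri

print(payment_uri("1ExampleAddressXXXXXXXXXXXXXXXXXXX", 0.0415, label="Target"))
# -> bitcoin:1ExampleAddressXXXXXXXXXXXXXXXXXXX?amount=0.04150000&label=Target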

Finally, I’d like to address the claim made by some critics that Bitcoin is a haven for bad behavior, for criminals and terrorists to transfer money anonymously with impunity. This is a myth, fostered mostly by sensationalistic press coverage and an incomplete understanding of the technology. Much like email, which is quite traceable, Bitcoin is pseudonymous, not anonymous. Further, every transaction in the Bitcoin network is tracked and logged forever in the Bitcoin blockchain, or permanent record, available for all to see. As a result, Bitcoin is considerably easier for law enforcement to trace than cash, gold or diamonds.

What’s the future of Bitcoin?

Bitcoin is a classic network effect, a positive feedback loop. The more people who use Bitcoin, the more valuable Bitcoin is for everyone who uses it, and the higher the incentive for the next user to start using the technology. Bitcoin shares this network effect property with the telephone system, the web, and popular Internet services like eBay and Facebook.

In fact, Bitcoin is a four-sided network effect. There are four constituencies that participate in expanding the value of Bitcoin as a consequence of their own self-interested participation. Those constituencies are (1) consumers who pay with Bitcoin, (2) merchants who accept Bitcoin, (3) “miners” who run the computers that process and validate all the transactions and enable the distributed trust network to exist, and (4) developers and entrepreneurs who are building new products and services with and on top of Bitcoin.

All four sides of the network effect are playing a valuable part in expanding the value of the overall system, but the fourth is particularly important.

All over Silicon Valley and around the world, many thousands of programmers are using Bitcoin as a building block for a kaleidoscope of new product and service ideas that were not possible before. And at our venture capital firm, Andreessen Horowitz, we are seeing a rapidly increasing number of outstanding entrepreneurs – not a few with highly respected track records in the financial industry – building companies on top of Bitcoin.

For this reason alone, new challengers to Bitcoin face a hard uphill battle. If something is to displace Bitcoin now, it will have to have sizable improvements and it will have to happen quickly. Otherwise, this network effect will carry Bitcoin to dominance.

One immediately obvious and enormous area for Bitcoin-based innovation is international remittance. Every day, hundreds of millions of low-income people go to work in hard jobs in foreign countries to make money to send back to their families in their home countries – over $400 billion in total annually, according to the World Bank. Every day, banks and payment companies extract mind-boggling fees, up to 10 percent and sometimes even higher, to send this money.

Switching to Bitcoin, which charges no or very low fees, for these remittance payments will therefore raise the quality of life of migrant workers and their families significantly. In fact, it is hard to think of any one thing that would have a faster and more positive effect on so many people in the world’s poorest countries.

Moreover, Bitcoin generally can be a powerful force to bring a much larger number of people around the world into the modern economic system. Only about 20 countries around the world have what we would consider to be fully modern banking and payment systems; the other roughly 175 have a long way to go. As a result, many people in many countries are excluded from products and services that we in the West take for granted. Even Netflix, a completely virtual service, is only available in about 40 countries. Bitcoin, as a global payment system anyone can use from anywhere at any time, can be a powerful catalyst to extend the benefits of the modern economic system to virtually everyone on the planet.

And even here in the United States, a long-recognized problem is the extremely high fees that the “unbanked” – people without conventional bank accounts – pay for even basic financial services. Bitcoin can be used to go straight at that problem, by making it easy to offer extremely low-fee services to people outside of the traditional financial system.

A third fascinating use case for Bitcoin is micropayments, or ultrasmall payments. Micropayments have never been feasible, despite 20 years of attempts, because it is not cost effective to run small payments (think $1 and below, down to pennies or fractions of a penny) through the existing credit/debit and banking systems. The fee structure of those systems makes that nonviable.

All of a sudden, with Bitcoin, that’s trivially easy. Bitcoins have the nifty property of infinite divisibility: currently down to eight decimal places after the dot, but more in the future. So you can specify an arbitrarily small amount of money, like a thousandth of a penny, and send it to anyone in the world for free or near-free.
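Concretely, the smallest unit (the satoshi) is one hundred-millionth of a Bitcoin, so arbitrarily small payments reduce to integer arithmetic. A tiny sketch; the $800 exchange rate is a made-up figure for illustration:

SATOSHIS_PER_BTC = 10 ** 8  # eight decimal places: 1 BTC = 100,000,000 satoshis

def btc_to_satoshis(btc):
    "Express a BTC amount as an exact integer number of satoshis."
    return int(round(btc * SATOSHIS_PER_BTC))

usd_per_btc = 800.0          # hypothetical exchange rate
micropayment_usd = 0.00001   # a thousandth of a penny
print(btc_to_satoshis(micropayment_usd / usd_per_btc))  # -> 1 satoshi (rounded)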

Think about content monetization, for example. One reason media businesses such as newspapers struggle to charge for content is because they need to charge either all (pay the entire subscription fee for all the content) or nothing (which then results in all those terrible banner ads everywhere on the web). All of a sudden, with Bitcoin, there is an economically viable way to charge arbitrarily small amounts of money per article, or per section, or per hour, or per video play, or per archive access, or per news alert.

Another potential use of Bitcoin micropayments is to fight spam. Future email systems and social networks could refuse to accept incoming messages unless they were accompanied with tiny amounts of Bitcoin – tiny enough to not matter to the sender, but large enough to deter spammers, who today can send uncounted billions of spam messages for free with impunity.
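A sketch of how such a filter might look; the postage threshold and function names are invented for illustration, not an existing system:

POSTAGE_SATOSHIS = 100  # hypothetical minimum "stamp": trivial per message, costly at spam volume

def accept_message(message, attached_satoshis):
    "Accept incoming mail only if it carries at least the minimum micro-postage."
    return attached_satoshis >= POSTAGE_SATOSHIS

print(accept_message("Hi Mom", 150))    # True: a normal sender barely notices the cost
print(accept_message("BUY NOW!!!", 0))  # False: a billion free spams are no longer free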

Finally, a fourth interesting use case is public payments. This idea first came to my attention in a news article a few months ago. A random spectator at a televised sports event held up a placard with a QR code and the text “Send me Bitcoin!” He received $25,000 in Bitcoin in the first 24 hours, all from people he had never met. This was the first time in history that you could see someone holding up a sign, in person or on TV or in a photo, and then send them money with two clicks on your smartphone: take the photo of the QR code on the sign, and click to send the money.

Think about the implications for protest movements. Today protesters want to get on TV so people learn about their cause. Tomorrow they’ll want to get on TV because that’s how they’ll raise money, by literally holding up signs that let people anywhere in the world who sympathize with them send them money on the spot. Bitcoin is a financial technology dream come true for even the most hardened anticapitalist political organizer.

The coming years will be a period of great drama and excitement revolving around this new technology.

For example, some prominent economists are deeply skeptical of Bitcoin, even though Ben S. Bernanke, formerly Federal Reserve chairman, recently wrote that digital currencies like Bitcoin “may hold long-term promise, particularly if they promote a faster, more secure and more efficient payment system.” And in 1999, the legendary economist Milton Friedman said: “One thing that’s missing but will soon be developed is a reliable e-cash, a method whereby on the Internet you can transfer funds from A to B without A knowing B or B knowing A – the way I can take a $20 bill and hand it over to you, and you may get that without knowing who I am.”

Economists who attack Bitcoin today might be correct, but I’m with Ben and Milton.

Further, there is no shortage of regulatory topics and issues that will have to be addressed, since almost no country’s regulatory framework for banking and payments anticipated a technology like Bitcoin.

But I hope that I have given you a sense of the enormous promise of Bitcoin. Far from a mere libertarian fairy tale or a simple Silicon Valley exercise in hype, Bitcoin offers a sweeping vista of opportunity to reimagine how the financial system can and should work in the Internet era, and a catalyst to reshape that system in ways that are more powerful for individuals and businesses alike.

2014 Gates Annual Letter: Myths About Foreign Aid - Gates Foundation

Yesterday, The Internet Solved a 20-year-old Mystery - On The Media


URL:http://www.onthemedia.org/story/yesterday-internet-solved-20-year-old-mystery/


Back in October, we told a story on the TLDR podcast about Daniel Drucker. Drucker was looking through his recently deceased dad's computer when he found a document that contained only joke punchlines. He turned to the website Ask Metafilter for help. Within hours, the website's users had reunited the punchlines with their long lost setups.

It looks like they've done it again.

Yesterday afternoon, a user posted a thread asking for help with a decades-old family mystery:

My grandmother passed away in 1996 of a fast-spreading cancer. She was non-communicative her last two weeks, but in that time, she left at least 20 index cards with scribbled letters on them. My cousins and I were between 8-10 years old at the time, and believed she was leaving us a code. We puzzled over them for a few months trying substitution ciphers, and didn't get anywhere.

The index cards appear to just be a random series of letters, and had confounded the poster's family for years. But it took Metafilter only 15 minutes to at least partially decipher them. User harperpitt quickly realized she was using the first letters of words, and that she was, in fact, writing prayers:

Was she a religious woman? The last As, as well as the AAA combo, make me think of "Amen, amen, amen." So extrapolating -- TYAGF = "Thank you Almighty God for..." It would make sense to end with "Thank you, Almighty God, for everything, Amen - Thank you, Almighty God, for everything, Amen, Amen, Amen."

AGH, YES! Sorry for the double post, but:

OFWAIHHBTNTKCTWBDOEAIIIHFUTDODBAFUOTAWFTWTAUALUNITBDUFEFTITKTPATGFAEA

Our Father who art in Heaven, hallowed be thy name... etc etc etc
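The "code" is a first-letter mnemonic, which is simple to reproduce; a quick sketch using the phrases from the thread:

def initials(text):
    "Encode a phrase the way the index cards did: first letter of each word."
    return "".join(word[0].upper() for word in text.split())

print(initials("Thank you Almighty God for everything"))  # -> TYAGFE
print(initials("Our Father who art in Heaven, hallowed be thy name"))  # -> OFWAIHHBTN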

The whole thread is fascinating. You should take a look at it. You might even be able to contribute. And if you haven't heard our interview with Daniel Drucker, you can listen to it below.

Backblaze Blog » What Hard Drive Should I Buy?


URL:http://blog.backblaze.com/2014/01/21/what-hard-drive-should-i-buy/


My last two blog posts were about expected drive lifetimes and drive reliability. These posts were an outgrowth of the careful work that we’ve done at Backblaze to find the most cost-effective disk drives. Running a truly unlimited online backup service for only $5 per month means our cloud storage needs to be very efficient and we need to quickly figure out which drives work.

Because Backblaze has a history of openness, many readers expected more details in my previous posts. They asked what drive models work best and which last the longest. Given our experience with over 25,000 drives, they asked which ones are good enough that we would buy them again. In this post, I’ll answer those questions.

Drive Population

At the end of 2013, we had 27,134 consumer-grade drives spinning in Backblaze Storage Pods. The breakdown by brand looks like this:

Hard Drives by Manufacturer Used by Backblaze

Brand             Number of Drives   Terabytes   Average Age in Years
Seagate                     12,765      39,576                    1.4
Hitachi                     12,956      36,078                    2.0
Western Digital              2,838       2,581                    2.5
Toshiba                         58         174                    0.7
Samsung                         18          18                    3.7

As you can see, they are mostly Seagate and Hitachi drives, with a good number of Western Digital thrown in. We don’t have enough Toshiba or Samsung drives for good statistical results.

Why do we have the drives we have? Basically, we buy the least expensive drives that will work. When a new drive comes on the market that looks like it would work, and the price is good, we test a pod full and see how they perform. The new drives go through initial setup tests, a stress test, and then a couple weeks in production. (A couple of weeks is enough to fill the pod with data.) If things still look good, that drive goes on the buy list. When the price is right, we buy it.

We are willing to spend a little bit more on drives that are reliable, because it costs money to replace a drive. We are not willing to spend a lot more, though.

Excluded Drives

Some drives just don’t work in the Backblaze environment. We have not included them in this study. It wouldn’t be fair to call a drive “bad” if it’s just not suited for the environment it’s put into.

We have some of these drives running in storage pods, but are in the process of replacing them because they aren’t reliable enough. When one drive goes bad, it takes a lot of work to get the RAID back on-line if the whole RAID is made up of unreliable drives. It’s just not worth the trouble.

The drives that just don’t work in our environment are Western Digital Green 3TB drives and Seagate LP (low power) 2TB drives. Both of these drives start accumulating errors as soon as they are put into production. We think this is related to vibration. The drives do somewhat better in the new low-vibration Backblaze Storage Pod, but still not well enough.

These drives are designed to be energy-efficient, and spin down aggressively when not in use. In the Backblaze environment, they spin down frequently, and then spin right back up. We think that this causes a lot of wear on the drive.

Failure Rates

We measure drive reliability by looking at the annual failure rate, which is the average number of failures you can expect running one drive for a year. A failure is when we have to replace a drive in a pod.
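As a sketch of that arithmetic (illustrative, not Backblaze's actual code), the annual failure rate is the number of failures divided by the drive-years of operation observed:

def annual_failure_rate(failures, total_drive_days):
    "Failures per drive-year of operation."
    return failures / (total_drive_days / 365.0)

# Hypothetical example: 100 drives each run for a year, and 4 of them fail.
print("{:.1%}".format(annual_failure_rate(4, 100 * 365)))  # -> 4.0%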

The table below has some details that don’t show up in the pretty chart, including the number of drives of each model that we have, and how old the drives are:

Number of Hard Drives by Model at Backblaze

Model                                           Size    Number of Drives   Average Age in Years   Annual Failure Rate
Seagate Desktop HDD.15 (ST4000DM000)            4.0TB   5199               0.3                      3.8%
Hitachi GST Deskstar 7K2000 (HDS722020ALA330)   2.0TB   4716               2.9                      1.1%
Hitachi GST Deskstar 5K3000 (HDS5C3030ALA630)   3.0TB   4592               1.7                      0.9%
Seagate Barracuda (ST3000DM001)                 3.0TB   4252               1.4                      9.8%
Hitachi Deskstar 5K4000 (HDS5C4040ALE630)       4.0TB   2587               0.8                      1.5%
Seagate Barracuda LP (ST31500541AS)             1.5TB   1929               3.8                      9.9%
Hitachi Deskstar 7K3000 (HDS723030ALA640)       3.0TB   1027               2.1                      0.9%
Seagate Barracuda 7200 (ST31500341AS)           1.5TB    539               3.8                     25.4%
Western Digital Green (WD10EADS)                1.0TB    474               4.4                      3.6%
Western Digital Red (WD30EFRX)                  3.0TB    346               0.5                      3.2%
Seagate Barracuda XT (ST33000651AS)             3.0TB    293               2.0                      7.3%
Seagate Barracuda LP (ST32000542AS)             2.0TB    288               2.0                      7.2%
Seagate Barracuda XT (ST4000DX000)              4.0TB    179               0.7                      n/a
Western Digital Green (WD10EACS)                1.0TB     84               5.0                      n/a
Seagate Barracuda Green (ST1500DL003)           1.5TB     51               0.8                    120.0%

The following sections focus on different aspects of these results.

1.5TB Seagate Drives

The Backblaze team has been happy with Seagate Barracuda LP 1.5TB drives. We’ve been running them for a long time – their average age is pushing 4 years. Their overall failure rate isn’t great, but it’s not terrible either.

The non-LP 7200 RPM drives have been consistently unreliable. Their failure rate is high, especially as they’re getting older.

1.5 TB Seagate Drives Used by Backblaze

Model                                   Size    Number of Drives   Average Age in Years   Annual Failure Rate
Seagate Barracuda LP (ST31500541AS)     1.5TB   1929               3.8                      9.9%
Seagate Barracuda 7200 (ST31500341AS)   1.5TB    539               3.8                     25.4%
Seagate Barracuda Green (ST1500DL003)   1.5TB     51               0.8                    120.0%

The Seagate Barracuda Green 1.5TB drive, though, has not been doing well. We got them from Seagate as warranty replacements for the older drives, and these new drives are dropping like flies. Their average age shows 0.8 years, but since these are warranty replacements, we believe that they are refurbished drives that were returned by other customers and erased, so they already had some usage when we got them.

Bigger Seagate Drives

The bigger Seagate drives have continued the tradition of the 1.5TB drives: they’re solid workhorses, but there is constant attrition as they wear out.

2.0 to 4.0 TB Seagate Drives Used by Backblaze

Model                                  Size    Number of Drives   Average Age in Years   Annual Failure Rate
Seagate Desktop HDD.15 (ST4000DM000)   4.0TB   5199               0.3                    3.8%
Seagate Barracuda (ST3000DM001)        3.0TB   4252               1.4                    9.8%
Seagate Barracuda XT (ST33000651AS)    3.0TB    293               2.0                    7.3%
Seagate Barracuda LP (ST32000542AS)    2.0TB    288               2.0                    7.2%
Seagate Barracuda XT (ST4000DX000)     4.0TB    179               0.7                    n/a

The good pricing on Seagate drives along with the consistent, but not great, performance is why we have a lot of them.

Hitachi Drives

If the price were right, we would be buying nothing but Hitachi drives. They have been rock solid, and have had a remarkably low failure rate.

Hitachi Drives Used by Backblaze

Model                                           Size    Number of Drives   Average Age in Years   Annual Failure Rate
Hitachi GST Deskstar 7K2000 (HDS722020ALA330)   2.0TB   4716               2.9                    1.1%
Hitachi GST Deskstar 5K3000 (HDS5C3030ALA630)   3.0TB   4592               1.7                    0.9%
Hitachi Deskstar 5K4000 (HDS5C4040ALE630)       4.0TB   2587               0.8                    1.5%
Hitachi Deskstar 7K3000 (HDS723030ALA640)       3.0TB   1027               2.1                    0.9%

Western Digital Drives

Back at the beginning of Backblaze, we bought Western Digital 1.0TB drives, and that was a really good choice. Even after over 4 years of use, the ones we still have are going strong.

We wish we had more of the Western Digital Red 3TB drives (WD30EFRX). They’ve also been really good, but they came after we already had a bunch of the Seagate 3TB drives, and when they came out their price was higher.

Western Digital Drives Used by Backblaze

Model                              Size    Number of Drives   Average Age in Years   Annual Failure Rate
Western Digital Green (WD10EADS)   1.0TB   474                4.4                    3.6%
Western Digital Red (WD30EFRX)     3.0TB   346                0.5                    3.2%
Western Digital Green (WD10EACS)   1.0TB    84                5.0                    n/a

What About Drives That Don’t Fail Completely?

Another issue when running a big data center is how much personal attention each drive needs. When a drive has a problem, but doesn’t fail completely, it still creates work. Sometimes automated recovery can fix this, but sometimes a RAID array needs that personal touch to get it running again.

Each storage pod runs a number of RAID arrays. Each array stores data reliably by spreading data across many drives. If one drive fails, the data can still be obtained from the others. Sometimes, a drive may “pop out” of a RAID array but still seem good, so after checking that its data is intact and it’s working, it gets put back in the RAID to continue operation. Other times a drive may stop responding completely and look like it’s gone, but it can be reset and continue running.

Measuring the time spent in a “trouble” state like this is a measure of how much work a drive creates. Once again, Hitachi wins. Hitachi drives get “four nines” of untroubled operation time, while the other brands just get “two nines”.

Untroubled Operation of Drives by Manufacturer Used at Backblaze

Brand             Active    Trouble   Number of Drives
Seagate           99.72%    0.28%     12,459
Western Digital   99.83%    0.17%        933
Hitachi           99.99%    0.01%     12,956

Drive Lifetime by Brand

The chart below shows the cumulative survival rate for each brand. Month by month, how many of the drives are still alive?

Hitachi does really well. There is an initial die-off of Western Digital drives, and then they are nice and stable. The Seagate drives start strong, but die off at a consistently higher rate, with a burst of deaths near the 20-month mark.

Having said that, you’ll notice that even after 3 years, by far most of the drives are still operating.

What Drives Is Backblaze Buying Now?

We are focusing on 4TB drives for new pods. For these, our current favorite is the Seagate Desktop HDD.15 (ST4000DM000). We’ll have to keep an eye on them, though. Historically, Seagate drives have performed well at first, and then had higher failure rates later.

Our other favorite is the Western Digital 3TB Red (WD30EFRX).

We still have to buy smaller drives as replacements for older pods where drives fail. The drives we absolutely won’t buy are Western Digital 3TB Green drives and Seagate 2TB LP drives.

A year and a half ago, Western Digital acquired the Hitachi disk drive business. Will Hitachi drives continue their excellent performance? Will Western Digital bring some of the Hitachi reliability into their consumer-grade drives?

At Backblaze, we will continue to monitor and share the performance of a wide variety of disk drive models. What has your experience been?


Geocodio | Ridiculously cheap bulk geocoding

Generate a Random Name - Fake Name Generator

An open letter to the Yale Community from Dean Mary Miller | Yale College


URL:http://yalecollege.yale.edu/content/open-letter-yale-community-dean-mary-miller?


*** UPDATED ***

January 20, 2014

To the Yale community,

A great deal has happened since I posted my January 17 open letter regarding YBB+, so I write a second time with more information and the latest updates.

Many of you have written to me directly or posted public comments expressing your concerns that the University’s reaction to YBB+ was heavy-handed. In retrospect, I agree that we could have been more patient in asking the developers to take down information they had appropriated without permission, before taking the actions that we did. However, I disagree that Yale violated its policies on free expression in this situation.

The information at the center of this controversy is the faculty evaluation, which Yale began collecting, not as a course selection tool, but as a way of helping faculty members improve their teaching. When a faculty committee decided in 2003 to collect and post these evaluations online for student use, it gave careful consideration to the format and felt strongly that numerical data would be misleading and incomplete if they were not accompanied by student comments. The tool created by YBB+ set aside the richer body of information available on the Yale website, including student comments, and focused on simple numerical ratings. In doing so, the developers violated Yale’s appropriate use policy by taking and modifying data without permission, but, more importantly, they encouraged students to select courses on the basis of incomplete information. To claim that Yale’s effort to ensure that students received complete information somehow violated freedom of expression turns that principle on its head.

Although the University acted in keeping with its policies and principles, I see now that it erred in trying to compel students to have as a reference the superior set of data that the complete course evaluations provide. That effort served only to raise concerns about the proper use of network controls. In the end, students can and will decide for themselves how much effort to invest in selecting their courses.

Technology has moved faster than the faculty could foresee when it voted to make teaching evaluations available to students over a decade ago, and questions of who owns data are evolving before our very eyes. Just this weekend, we learned of a tool that replicates YBB+'s efforts without violating Yale’s appropriate use policy, and that leapfrogs over the hardest questions before us. What we now see is that we need to review our policies and practices. To that end, the Teaching, Learning, and Advising Committee, which originally brought teaching evaluations online, will take up the question of how to respond to these developments, and the appropriate members of the IT staff, along with the University Registrar, will review our responses to violations of University policy. We will also state more clearly the requirement/expectation for student software developers to consult with the University before creating applications that depend on Yale data, and we will create an easy means for them to do so.

I thank all who have written, either to me directly or publicly, for their thoughts and for the civility with which they expressed them.

Mary Miller
Dean of Yale College
Sterling Professor of History of Art

 

------------------------------------------------------------

 

January 17, 2014

To the Yale Community:

This past week, students in Yale College lost access to YBB+ because its developers, although acting with good intentions, used university resources without permission and violated the acceptable use policy that applies to all members of the Yale community. The timing for its users could not have been worse: over 1,000 of them had uploaded worksheets during the course selection period and relied on those worksheets to design their course schedules. And the means for shutting down the site immediately -- by blocking it -- led to charges that the university was suppressing free speech.

Free speech defines Yale's community; the people who belong to it understand that they are entitled to share their views just as they must tolerate the views of others, no matter how offensive. The right to free speech, however, does not entitle anyone to appropriate university resources. In the case of YBB+, developers were unaware that they were not only violating the appropriate use policy but also breaching the trust the faculty had put in the college to act as stewards of their teaching evaluations. Those evaluations, whose primary purpose is to inform instructors how to improve their teaching, became available to students only in recent years and with the understanding that the information they made available to students would appear only as it currently appears on Yale's sites -- in its entirety.

Members of the YCDO and the University Registrar met this week with the developers, and to good end: the developers learned more about the underlying problems with using data without permission, the importance of communicating in advance with the university on projects that require approval and cooperation, and some of the existing mechanisms for collaborating with the university, among them the Yale College Council. Administrators, for their part, heard more about the demand for better tools and guidelines for the growing number of student developers, the need for a better approach to students who violate the acceptable use policy -- in most cases unwittingly -- and the value students place on information contained in teaching evaluations. All parties agreed to work toward a positive outcome, and they remain in conversation with each other to that end.

Mary Miller
Dean of Yale College
Sterling Professor of History of Art


Article 36


URL:http://nbviewer.ipython.org/url/norvig.com/ipython/Economics.ipynb


This is a simulation of an economic marketplace in which there is a population of actors, each of which has a level of wealth (a single number) that changes over time. On each time step two agents (chosen by an interaction rule) interact with each other and exchange wealth (according to a transaction rule). The idea is to understand the evolution of the population's wealth over time. My hazy memory is that this idea came from a class by Prof. Sven Anderson at Bard (any errors or misconceptions here are due to my (Peter Norvig) misunderstanding of his idea). Why this is interesting: (1) an example of using simulation to model the world. (2) Many students will have preconceptions about how economies work that will be challenged by the results shown here.

Population Distributions

First things first: what should our initial population look like? We will provide several distribution functions (constant, uniform, Gaussian, etc.) and a sample function, which samples N elements from a distribution and then normalizes them to have a given mean. By default we will have N=5000 actors and an initial mean wealth of 100 simoleons.

In [299]:

import random
import matplotlib
import matplotlib.pyplot as plt

N = 5000   # Default size of population
mu = 100.  # Default mean of population's wealth

def sample(distribution, N=N, mu=mu):
    "Sample from the distribution N times, then normalize results to have mean mu."
    return normalize([distribution() for _ in range(N)], mu * N)

def constant(mu=mu): return mu
def uniform(mu=mu, width=mu): return random.uniform(mu - width/2, mu + width/2)
def gauss(mu=mu, sigma=mu/3): return random.gauss(mu, sigma)
def beta(alpha=2, beta=3): return random.betavariate(alpha, beta)
def pareto(alpha=4): return random.paretovariate(alpha)

def normalize(numbers, total):
    "Scale the numbers so that they add up to total."
    factor = total / float(sum(numbers))
    return [x * factor for x in numbers]

In a transaction, two actors come together; they have existing wealth levels X and Y. For now we will only consider transactions that conserve wealth, so our transaction rules will decide how to split up the pot of X+Y total wealth.

In [360]:

def random_split(X, Y):
    "Take all the money in the pot and divide it randomly between X and Y."
    pot = X + Y
    m = random.uniform(0, pot)
    return m, pot - m

def winner_take_most(X, Y, most=3/4.):
    "Give most of the money in the pot to one of the parties."
    pot = X + Y
    m = random.choice((most * pot, (1 - most) * pot))
    return m, pot - m

def winner_take_all(X, Y):
    "Give all the money in the pot to one of the actors."
    return winner_take_most(X, Y, 1.0)

def redistribute(X, Y):
    "Give 55% of the pot to the winner; 45% to the loser."
    return winner_take_most(X, Y, 0.55)

def split_half_min(X, Y):
    """The poorer actor only wants to risk half his wealth;
    the other actor matches this; then we randomly split the pot."""
    pot = min(X, Y)
    m = random.uniform(0, pot)
    return X - pot/2. + m, Y + pot/2. - m

How do you decide which parties interact with each other? The rule anyone samples two members of the population uniformly and independently, but there are other possible rules, like nearby(pop, k), which chooses one member uniformly and then chooses a second within k index elements away, to simulate interactions within a local neighborhood.

In [356]:

def anyone(pop): return random.sample(range(len(pop)), 2)

def nearby(pop, k=5):
    i = random.randrange(len(pop))
    j = i + random.choice((1, -1)) * random.randint(1, k)
    return i, (j % len(pop))

def nearby1(pop): return nearby(pop, 1)

Now let's describe the code to run the simulation and summarize/plot the results. The function simulate does the work; it runs the interaction function to find two actors, then calls the transaction function to figure out how to split their wealth, and repeats this T times. The only other thing it does is record results. Every so-many steps, it records some summary statistics of the population (by default, this will be every 25 steps).

What information do we record to summarize the population? Out of the N=5000 (by default) actors, we will record the wealth of exactly nine of them: the ones, in sorted-by-wealth order, that occupy the top 1% spot (that is, if N=5000, this would be the 50th wealthiest actor), then the top 10%, 25%, 1/3, and the median; and then likewise from the bottom: the 1%, 10%, 25%, and 1/3.

(Note that we record the median, which changes over time; the mean is defined to be 100 when we start, and since all transactions conserve wealth, the mean will always be 100.)

What do we do with these results, once we have recorded them? First we print them in a table for the first time step, the last, and the middle. Then we plot them as nine lines in a plot where the y-axis is wealth and the x-axis is time (note that when the x-axis goes from 0 to 1000, and we have record_every=25, that means we have actually done 25,000 transactions, not 1000).

In [368]:

def simulate(population, transaction_fn, interaction_fn, T, percentiles, record_every):
    "Run simulation for T steps; collect percentiles every 'record_every' time steps."
    results = []
    for t in range(T):
        i, j = interaction_fn(population)
        population[i], population[j] = transaction_fn(population[i], population[j])
        if t % record_every == 0:
            results.append(record_percentiles(population, percentiles))
    return results

def report(distribution=gauss, transaction_fn=random_split, interaction_fn=anyone,
           N=N, mu=mu, T=5*N, percentiles=(1, 10, 25, 33.3, 50, -33.3, -25, -10, -1),
           record_every=25):
    "Print and plot the results of the simulation running T steps."
    # Run simulation
    population = sample(distribution, N, mu)
    results = simulate(population, transaction_fn, interaction_fn, T, percentiles, record_every)
    # Print summary
    print('Simulation: {} * {}(mu={}) for T={} steps with {} doing {}:\n'.format(
          N, name(distribution), mu, T, name(interaction_fn), name(transaction_fn)))
    fmt = '{:6}' + '{:10.2f} ' * len(percentiles)
    print(('{:6}' + '{:>10} ' * len(percentiles)).format('', *map(percentile_name, percentiles)))
    for (label, nums) in [('start', results[0]),
                          ('mid', results[len(results)//2]),
                          ('final', results[-1])]:
        print(fmt.format(label, *nums))
    # Plot results
    for line in zip(*results):
        plt.plot(line)
    plt.show()

def record_percentiles(population, percentiles):
    "Pick out the percentiles from population."
    population = sorted(population, reverse=True)
    N = len(population)
    return [population[int(p * N / 100.)] for p in percentiles]

def percentile_name(p):
    return ('median' if p == 50 else
            '{} {}%'.format(('top' if p > 0 else 'bot'), abs(p)))

def name(obj):
    return getattr(obj, '__name__', str(obj))

Finally, let's run a simulation!

In [369]:

report(gauss, random_split)

How do we interpret this? Well, we can see the mass of wealth spreading out: the rich get richer and the poor get poorer. We know the rich get richer because the blue and green lines (top 10% and top 1%) are going up: the actor in the 1% position (the guy with the least money out of the 1%, or to put it another way, the most money out of the 99%) starts with 177.13 and ends up with 447.98 (note this is not necessarily the same guy, just the guy who ends up in that position). The guy at the 10% spot also gets richer, going from 141.87 to 228.06. The 25% and 33% marks stay roughly flat, but everyone else gets poorer! The median actor loses 30% of his wealth, and the bottom 1% actor loses almost 95% of his wealth.

Effect of Starting Population

Now let's see if the starting population makes any difference. My vague impression is that we're dealing with ergodic Markov chains and it doesn't much matter what state you start in. But let's see:

It looks like we can confirm that the starting population doesn't matter much—if we are using the random_split rule then in the end, wealth accumulates to the top third at the expense of the bottom two-thirds, regardless of starting population.

Effect of Transaction Rule

Now let's see what happens when we vary the transaction rule. The random_split rule produces inequality: the actor at the bottom quarter of the population has only about a third of the mean wealth, and the actor at the top 1% spot has 4.5 times the mean. Suppose we want a society with more income equality. We could use the split_half_min rule, in which each transaction has a throttle in that the poorer party only risks half of their remaining wealth. Or we could use the redistribute rule, in which the loser of a transaction still gets 45% of the total (meaning the loser will actually gain in many transactions). Let's see what effects these rules have. In analyzing these plots, note that they have different Y-axes.

We see that the redistribute rule is very effective in reducing income inequality: the lines of the plot all converge towards the mean of 100 instead of diverging. With the split_half_min rule, inequality increases at a rate about half as fast as random_split. However, the split_half_min plot looks like it hasn't converged yet (whereas all the other plots reach convergence at about the 500 mark). Let's try running split_half_min 10 times longer:

In [372]:

report(gauss, split_half_min, T=50*N)

It looks like split_half_min still hasn't converged, and is continuing to (slowly) drive wealth to the top 10%.

Now let's shift gears: suppose that we don't care about decreasing income inequality; instead we want to increase opportunity for some actors to become wealthier. We can try the winner_take_most or winner_take_all rules (compared to the baseline random_split):

We see that the winner_take_most rule, in which the winner of a transaction takes 3/4 of the pot, does not increase the opportunity for wealth as much as random_split, but that winner_take_all is very effective at concentrating almost all the wealth in the hands of the top 10%, and makes the top 1% 4 times as wealthy as random_split.

That suggests we look at where the breaking point is. Let's consider several different amounts for what winner takes:

In [375]:

defwinner_take_80(X,Y):returnwinner_take_most(X,Y,0.80)defwinner_take_90(X,Y):returnwinner_take_most(X,Y,0.90)defwinner_take_95(X,Y):returnwinner_take_most(X,Y,0.95)report(gauss,winner_take_80)report(gauss,winner_take_90)report(gauss,winner_take_95)

We see that winner takes 80% produces results similar to random_split, and that winner takes 95% is similar to winner takes all for the top 10%, but is much kinder to the bottom 75%.

Suppose that transactions are constrained to be local; that you can only do business with your close neighbors. Will that make income more equitable, because there will be no large, global conglomerates? Let's see:

We see that the nearby rule, which limits transactions to your 5 closest neighbors in either direction (out of 5000 total actors), has a negligible effect on the outcome. I found that fairly surprising. But the nearby1 rule, which lets you do business only with your immediate left or right neighbor does have a slight effect towards income equality. The bottom quarter still do poorly, but the top 1% only gets to about 85% of what they get under unconstrained trade.

Snowden-haters are on the wrong side of history | The Reinvigorated Programmer


URL:http://reprog.wordpress.com/2014/01/20/snowden-haters-are-on-the-wrong-side-of-history/


In the autumn of 1963, J. Edgar Hoover’s FBI, worried at Martin Luther King’s growing influence, began tapping his phones and bugging his hotel rooms. They hoped to discredit him by gaining evidence that he was a communist, but found no such evidence. But they did find evidence that he was having affairs. The FBI gathered what they considered to be the most incriminating clips, and in November 1964 they anonymously sent tapes to him along with a letter telling him to commit suicide:

White people in this country have enough frauds of their own but I am sure they don’t have one at this time anywhere near your equal. [...] You are a colossal fraud and an evil, vicious one at that. [...] you don’t believe in any personal moral principles. You [...] have turned out to be not a leader but a dissolute, abnormal moral imbecile. [...] Your “honorary” degrees, your Nobel Prize (what a grim farce) and other awards will not save you. King, there is only one thing left for you to do. You know what it is. [...] There is but one way out for you. You better take it before your filthy, abnormal fraudulent self is bared to the nation.

It seems incredible that a law-enforcement agency could write this, but it’s well documented and uncontroversial that they did.

Jump forward fifty years, and here is what NSA analysts and Pentagon insiders are saying about ubiquitous-surveillance whistleblower Edward Snowden:

“In a world where I would not be restricted from killing an American, I personally would go and kill him myself. A lot of people share this sentiment.”

“I would love to put a bullet in his head. I do not take pleasure in taking another human being’s life, having to do it in uniform, but he is single-handedly the greatest traitor in American history.”

“His name is cursed every day over here. Most everyone I talk to says he needs to be tried and hung, forget the trial and just hang him.”

Sounds kinda familiar, doesn’t it?

Meanwhile, Marc Thiessen, conservative commentator and previously George W. Bush speech-writer, is saying this:

Amnesty? Have they lost their minds? Snowden is a traitor to his country, who is responsible for the most damaging theft and release of classified information in American history. [...] Maybe we offer him life in prison instead of a firing squad, but amnesty? That would be insanity

Today, the third Monday in January, is Martin Luther King day.

Ever notice how we don’t have a J. Edgar Hoover day?

For anyone who’s paying attention to all this, the verdict of history is already in. Fools trying to paint Snowden as a spy are really not paying attention. For the hard of thinking, here is the key observation: spies do not give their material to newspapers. An actual spy would have quietly disappeared with the damaging intel, and no-one in America would ever have known anything about it. Instead, Snowden has demonstrated extraordinary courage in doing what he knew to be the right thing — revealing a threat to the American constitution that he swore to uphold — even knowing it meant that his life as he knew it was over.

It seems perfectly clear that Snowden will eventually receive a full presidential pardon and a place in the history books as an American hero. It seems extremely unlikely that Obama will have the guts to issue the pardon (though I wouldn’t necessarily rule it out); his successor might not either, nor the successor after that. But eventually a president with the perspective of history, clearly seeing Snowden in his place alongside Martin Luther King, Daniel Ellsberg and Rosa Parks, will issue that pardon. We can only hope it will be soon enough for Snowden to enjoy a good chunk of his life back in the country he loves.

So. The verdict of history on Snowden is really not in question.

The question that remains is what side of history commentators like Marc Thiessen, and all those conveniently anonymous NSA sources, want to be on. Because at the moment, they’re setting themselves up to be this decade’s J. Edgar Hoover, George Wallace and Bull Connor.

 

Check out my new Doctor Who book, the Eleventh Doctor


AMC movie theater calls FBI to arrest a Google Glass user :: The Gadgeteer

'OpenBSD Foundation Fundraising for 2014' - MARC

Douglas Adams's Mac IIfx


URL:http://www.vintagemacworld.com/iifx.html


At the end of 2003, I was looking to buy a Mac IIfx for some hacking. I needed a Mac with six NuBus slots and the IIfx is the fastest model that fitted my requirements. One turned up on eBay and I was able to win the auction at a sensible price. The seller was a computer scrapper who had no knowledge or interest in the history of the system.

The system was purchased "untested, as is" and the photo accompanying the auction (see opposite) indicated that it wasn't going to be in pristine condition. When delivered the case was filthy and the steel RF shielding inside had surface rust indicating that it had been stored in a damp environment for a couple of years. The side of the case (psu end) had four grubby chunks of Blu Tac attached. Obviously a previous owner had decided to stand the IIfx on its side as a tower case and used the Blu Tac to stabilise it. (If you try this at home with a IIfx, please stand the case on the other end so that the psu ventilation slots aren't blocked.)

I stripped out the components, scraped off the majority of the Blu Tac and dumped the bare case in the bath tub with some detergent. As the photos show, some of the Blu Tac is still lodged around the ventilation slots and the underside of the case has a peculiar sunburn pattern. My IIfx still looks a mess but I only bought it for hacking anyway.

When switched on for the first time, it was clear that the last user had little understanding of how to store files on the hard disk. The root directory contained hundreds of MacWrite documents. Scrolling through them was a pain and, as I have no interest in other people's private affairs, I selected the lot and deleted them. That was mistake number one. I left the applications folder intact to have a look at later.

In its day, the IIfx was Apple's flagship computer and a well specified machine would have left little change from £10,000. My new purchase had 20MB RAM, an A4 portrait display card, a 256 colour Toby video card and a very noisy Fujitsu (non-Apple ROM) hard disk drive running System software 7.5.5. All of the blanking plates for the NuBus slots had been removed and it is likely that any useful cards had been removed as the Mac descended the scrap chain; the Open Transport preferences contained a reference to an ethernet adapter and the control panel for a Radius display card was installed.

The applications software installed on the system didn't look very interesting. All of the files I had deleted were MacWrite documents and it appeared that the IIfx had been used as a glorified word processor. However Retrospect Remote was installed for backups so somebody had been using the Mac for serious work previously. The last backup was performed on 02-02-1997 but, according to the last modified file dates, the system remained in use until March 1999. Some power user utilities from CE and More were also installed.

I started up MacWrite Pro and noticed that it was registered to "Douglas Adams, Serious Productions Ltd". I paid little attention to this as I had seen warez copies of Claris software where the registered user was Douglas Adams. I then started Claris Resolve, ignoring a warning dialog (mistake number two), and noted that this software was also registered to Douglas Adams. The copies of Claris Works 4.0 and Now Up-to-Date were registered to Jane Belson; I was unfamiliar with the name but a quick web search determined that she is Douglas Adams's widow.

Deleting all those files suddenly seemed like a dumb thing to have done... To undo mistake number one, I popped an ethernet card in the IIfx, mounted an AppleShare volume and ran Norton Utilities to recover the files onto the server.

The results? I recovered hundreds of documents relating to Jane Belson's professional work and precisely two that bear the hand of Douglas Adams. I doubt whether the copyright lawyers would chase me for publishing his Idiots Guide to using a Mac but you wouldn't be thanking me either. For now at least, the draft of a TV sketch called Brief Re-encounter is strictly for my personal enjoyment.

And mistake number two? I should have paid attention to the dialog box when I'd started up Claris Resolve. In twenty years of Mac use working on literally thousands of systems, I've only seen viruses half a dozen times so I ignored the warning. How wrong can you get... A precautionary scan a few hours later using the old Disinfectant application showed that Claris Resolve had been infected by the MBDF A virus and that every application that I had subsequently run was infected too. Cheers, Douglas!

Jane Belson contacted me earlier this year, telling me that she recognised the IIfx as the one that sat next to her desk and to that of Douglas. A copy of the sketch Brief Re-encounter was sent to her.

Leander Kahney reported on the IIfx in his Cult of Mac weblog at Wired magazine in March 2004 (Article).

Copyright information: If you wish to use any images on these pages, please contact the author, Phil Beesley on beesley@mandrake.demon.co.uk.

Payments startup Stripe has joined the Billion Dollar Club - WSJ.com

Dogecoin and the Appeal of Small Numbers | Diego Basch's Blog


Comments:"Dogecoin and the Appeal of Small Numbers | Diego Basch's Blog"

URL:http://diegobasch.com/dogecoin-and-the-appeal-of-small-numbers


Dogecoin is a unique phenomenon in the fascinating world of cryptocurrencies. It’s barely six weeks old, and as I write this post its network has more computing power than any other cryptocurrency except for Bitcoin. It made headlines this weekend when its community raised enough money to send the Jamaican bobsled team to the Sochi Winter Olympics.

From a technical standpoint, Dogecoin is essentially a branded clone of Litecoin (the second cryptocurrency in terms of total market value). Without a doubt one of the most important factors contributing to Dogecoin’s popularity is its community. The Dogecoin subreddit has almost 40k users right now. The front page usually has a good mix of humor, good will, finance, and technology. Check it out if you haven’t already.

There’s another more subtle factor that I believe plays in Dogecoin’s favor: its tiny value. One DOGE is worth about $0.0015 right now. In other words, one dollar buys you about 600-700 DOGE. Contrast that with Bitcoin: $1 is about 0.001 BTC. This puts Bitcoin and Dogecoin in two completely different mental buckets for most people. One BTC is comparable to an ounce of gold. The press reinforces this idea, and many people view Bitcoin as a digital store of value. The daily transaction volume of BTC is about 0.2 percent of the total bitcoins in existence, which means that BTC does not circulate very much yet.

Contrast this with Dogecoin, for which the daily transaction volume is close to 15%. Where does that money go? Perhaps the most common usage of DOGE is to give online tips. Compare the activity of Reddit’s bitcointip and dogetipbot, and you’ll see the latter is much more active. What would you prefer as a tip, 100 DOGE or 0.000002 BTC? Both are almost meaningless in terms of monetary value, but receiving 100 units of a coin does feel better. It’s also easier to give tips; you don’t have to think much about tipping someone 10, 25 or 100 DOGE. With BTC you either have to choose a dollar amount, or be very careful with the number of zeroes.

The reason a DOGE is worth so little is the total supply of coins. The Bitcoin software has an embedded constant called MAX_MONEY. For Bitcoin it’s set to 21 million, which means that if Bitcoin takes over as a world currency it will be impossible for most people to ever own one. Litecoin is only slightly better, at 84 million. For DOGE, it’s one hundred billion (perhaps more, yet to be decided). This makes it unlikely that one DOGE will be worth $1 any time soon (or ever). It’s easy and fun to exchange $20 for 10k DOGE and give a fraction of them to strangers on the internet. Anyone can still mine hundreds of dogecoins per day with a desktop computer, and not feel very attached to them. Being a “slumdoge millionaire” is still affordable to many.
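To put those supply caps in perspective, here is a quick back-of-the-envelope calculation. This is my own illustrative arithmetic (with a rounded world-population figure), not something from the post:

# Rough coins-per-person if each supply cap were split evenly
# across ~7 billion people (rounded figure; purely illustrative).
BTC_CAP = 21 * 10**6 # Bitcoin's MAX_MONEY
LTC_CAP = 84 * 10**6 # Litecoin
DOGE_CAP = 100 * 10**9 # Dogecoin (initial figure, possibly more later)
POPULATION = 7 * 10**9

for name, cap in [("BTC", BTC_CAP), ("LTC", LTC_CAP), ("DOGE", DOGE_CAP)]:
    print "%s: %.4f coins per person" % (name, float(cap) / POPULATION)
# BTC: 0.0030 coins per person
# LTC: 0.0120 coins per person
# DOGE: 14.2857 coins per person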

In a world where people get a kick out of likes or retweets, Dogetips take it up a notch. A Dogetip is an upvote that you can use, internet karma points that are actually worth something. So fun, very value.

Image credit: /u/binxalot, this person deserves tips. Of course I accept them too

DHpZsQCDKq9WbqyqfetMcGq87pFZfkwLBh


How I “hacked” Kayak and booked a cheaper flight | Jose Casanova's Blog


Comments:"How I “hacked” Kayak and booked a cheaper flight | Jose Casanova's Blog"

URL:http://www.josecasanova.com/blog/how-i-hacked-kayak-and-booked-a-cheaper-flight/


Let me start this out by saying I didn’t “hack” anything in the black-hat hacker sense; I found a market inefficiency and leveraged it to my advantage. It must be the day trader in me. No harm was done to any computers or systems in the making of this post.

TL;DR: I booked a flight through Kayak using a VPN and saved ~$100.

Long version: I was looking for flights to New Orleans when I realized that the flight price I had checked the day before was ~$100 cheaper. I started to wonder why the price had gone up so much in one day and tried checking the flight again using only Google Incognito, but the price didn’t budge. Maybe my VPN had something to do with it? The night before I had been using a VPN (I use BTGuard btw) and Kayak thought I was in Toronto, Canada. I guess flights are cheaper if you are not from the departure city?

So what did I do?

So I had originally gone to Kayak today and checked flights from Miami to New Orleans (Mardi Gras, w00t!). This was done without a VPN but using Google’s Incognito feature. Take a look at how much the flights were:

Flights to New Orleans from Miami (Non-VPN)

Also, check out where my IP was saying that I was from:

This is my “real” IP, no VPN

I thought this was strange since the night before I had checked flights and they were ~$100 cheaper. I realized I was logged into my VPN and thought it might have to do with that (BTW, I use the VPN to mask my internet traffic… sorry NSA). So what did I do? Tried checking again while being logged into my VPN!

This is me being logged into my VPN:

VPN FTW!

And here is where my IP is saying that I am from:

Canada, eh?

So I tried Kayak again, while being “shown” as being from Canada and this is what I got:

Check out the Canadian flag at the top right

That’s a $70+ price difference (I don’t think that included taxes)! Also, when I had checked earlier, that $345 flight wasn’t there… so it was a +$100 difference. When I went to book my flight, my checkout total was in euros! The thing is, it wasn’t 380 euros, but 207 euros! That converts to about $280 USD.

Euros wuddup

Moral of the story? Try booking your flights through a VPN, maybe you’ll save a few bucks….. even if you pay in euros.

PS: I checked my online bank statement and I paid $281.60 total!

PPS: The flight is now over $400 without a VPN via Kayak + Google Incognito.

Discuss on Hacker News

We spent a week making Trello boards load extremely fast. Here’s how we did it. - Fog Creek Blog


Comments:" We spent a week making Trello boards load extremely fast. Here’s how we did it. - Fog Creek Blog "

URL:http://blog.fogcreek.com/we-spent-a-week-making-trello-boards-load-extremely-fast-heres-how-we-did-it/


We made a promise with Trello: you can see your entire project in a single glance. That means we can show you all of your cards so you can easily see things like who is doing what, where a task is in the process, and so forth, just by scrolling.

You all make lots of cards. But when the site went to load all of your hundreds and thousands of cards at once, boards were loading pretty slow. Okay, not just pretty slow, painfully slow. If you had a thousand or so cards, it would take seven to eight seconds to completely render. In that time, the browser was totally locked up. You couldn’t click anything. You couldn’t scroll. You just had to sit there.

With the big redesign, one of our goals was to make switching boards really easy. We like to think that we achieved that goal. But when the browser locked up every time you switched boards, it was an awfully slow experience. Who cared if the experience was easy? We had to make it fast.

So I set out on a mission: using a 906 card board on a 1400×1000 pixel window, I wanted to improve board rendering performance by 10% every day for a week. It was bold. It was crazy. Somebody might have said it was impossible. But I proved that theoretical person wrong. We more than achieved that goal. We got perceived rendering time for our big board down to one second.

Naturally, I kept track of my daily progress and implementation details in Trello. Here’s the log.

Monday (7.2 seconds down to 6.7 seconds. 7% reduction.)

Heavy styles like borders, shadows, and gradients can really slow down a browser. So the first thing we tried was removing things like borders on avatars, card borders, backgrounds and borders on card badges, shadows on lists, and the like. It made a big impact, especially for scrolling. We didn’t set out for a flat design. Our primary objective was to make things faster, but the result was a cleaner, simpler look.

Tuesday (6.7 seconds down to 5.9 seconds. 12% reduction.)

On the client, we use backbone.js to structure our app. With backbone, it’s really convenient to use views. Really, very convenient. For every card, we gave each member its own view. When you clicked on a member on a card, it came up with a mini-profile and a menu with an option to remove them from the card. All those extra views generated a lot of useless crap for the browser and used up a bunch of time.

So instead of using views for members, we now just render the avatars and use a generic click handler that looks for a data-idmem attribute on the element. That’s used to look up the member model to generate the menu view, but only when it’s needed. That made a difference.

I also gutted more CSS.

Wednesday (5.9 seconds… to 5.9 seconds. 0% reduction.)

I tried using the browser’s native innerHTML and getElementsByClassName API methods instead of jQuery’s html and append. I thought native APIs might be easier for the browser to optimize, and what I read confirmed that. But for whatever reason, it didn’t make much of a difference for Trello.

The rest of the day was a waste. I didn’t make much progress.

Thursday (5.9 seconds down to 960ms)

Thursday was a breakthrough. I tried two major things: preventing layout thrashing and progressive rendering. They both made a huge difference.

Preventing layout thrashing

First, layout thrashing. The browser does two major things when rendering HTML: layouts, which are calculations to determine the dimensions and position of the element, and paints, which make the pixels show up in the right spot with the correct color. Basically. We cut out some of the paints when we removed the heavy styles. There were fewer borders, backgrounds, and other pixels that the browser had to deal with. But we still had an issue with layouts.

Rendering a single card used to work like this. The card basics like the white card frame and card name were inserted into the DOM. Then we inserted the labels, then the members, then the badges, and so on. We did it this way because of another Trello promise: real-time updates. We needed a way to atomically render a section of a card when something changed. For example, when a member was added it triggered the cardView.renderMembers method so that it only rendered the members and didn’t need to re-render the whole card and cause an annoying flash.

Instead of building all the HTML up front, inserting it into the DOM, and triggering a layout just once, we built some HTML, inserted it into the DOM, triggered a layout, built more HTML, inserted it into the DOM, triggered a layout, built more HTML, and so on. Multiple insertions for each card. Times a thousand. That’s a lot of layouts. Now we render those sections before inserting the card into the DOM, which prevents a bunch of layouts and speeds things up.

In the old way, the card view render function looked something like this…

render: ->
  data = model.toJSON()
  @$.innerHTML = templates.fill(
    'card_in_list',
    data
  ) # add stuff to the DOM, layout
  @renderMembers() # add more stuff to the DOM, layout
  @renderLabels() # add even more stuff to the DOM, layout
  @

With the change, the render function looks something like this…

render: ->
  data = model.toJSON()
  data.memberData = []
  for member in members
    data.memberData.push member.toJSON()
  data.labelData = []
  for label in labels when label.isActive
    data.labelData.push label
  partials =
    "member": templates.member
    "label": templates.label
  @$.innerHTML = templates.fill(
    'card_in_list',
    data,
    partials
  ) # only add stuff to the DOM once, only one layout
  @

We had more layout problems, though. In the past, the width of the list would adjust to your screen size. So if you had three lists, it would try to fill up as much as the screen as possible. It was a subtle effect. The problem was that when the adjustment happened, the layout of every list and every card would need to be changed, causing major layout thrashing. And it triggered often: when you toggled the sidebar, added a list, resized the window, or whatnot. We tried having lists be a fixed width so we didn’t have to do all the calculations and layouts. It worked well so we kept it. You don’t get the adjustments, but it was a trade-off we were willing to make.

Progressive rendering

Even with all the progress, the browser was still locking up for five seconds. That was unacceptable, even though I technically reached my goal. According to Chrome DevTools’ Timeline, most of the time was being spent in scripts. Trello developer Brett Kiefer had fixed a previous UI lockup by deferring the initialization of jQuery UI droppables until after the board had been painted, using the queue method in the async library. In that case, “click … long task … paint” became “click … paint … long task”.

I wondered if a similar technique could be used for rendering cards progressively. Instead of spending all of the browser’s time generating one huge amount of DOM to insert, we could generate a small amount of DOM, insert it, generate another small amount, insert it, and so forth, so that the browser could free up the UI thread, paint something quickly, and prevent locking up. This really did the trick. Perceived rendering went down to 960ms on my 1,000 card board.


Here’s how the code works. Cards in a list are contained in a backbone collection. That collection has its own view. The card collection view render method with the queueing technique looks like this, roughly…

render: ->
  renderQueue = new async.queue (models, next) =>
    @appendSubviews(@subview(CardView, model) for model in models)
    # _.defer, a.k.a. setTimeout(fn, 0), will yield the UI thread
    # so the browser can paint.
    _.defer next
  , 1
  chunkSize = 30
  models = @getModels()
  modelChunks = []
  while models.length > 0
    modelChunks.push(models.splice(0, chunkSize))
  for models in modelChunks
    # async.queue flattens arrays so let's wrap this array
    # so it's an array on the other end...
    renderQueue.push [models]
  @

We could probably just do a for loop with a setTimeout 0 and get the same effect since we know the size of the array. But it worked, so I was happy. There is still some slowness as the cards finish rendering on really big boards, but compared to total browser lock-up, we’ll accept that trade-off.

Trello developer Daniel LeCheminant chipped in by queueing event delegation on cards. Every card has a certain number of events for clicking, dragging, and so forth. It’s more stuff we can put off until later.

We also used the translateZ: 0 hack for a bit of gain. With covers, stickers, and member avatars, cards can have a lot of images. In your CSS, if you apply translateZ: 0 to the image element, you trick the browser into using the GPU to paint it. That frees up the CPU to do one of the many other things it needs to do. This browser behavior could change any day which makes it a hack, but hey, it worked.

Friday

I made a lot of bugs that week, so I fixed them on Friday.

That was the whole week. If rendering on your web client is slow, look for excessive paints and layouts. I highly recommend using Chrome DevTool’s Timeline to help you find trouble areas. If you’re in a situation where you need to render a lot of things at once, look into async.queue or some other progressive rendering.

Now that we have starred boards and fast board switching and rendering, it’s easier than ever to use multiple boards for your project. We wrote “Using Multiple Boards for a Super-Flexible Workflow” on the Trello blog to show you how to do it. On the UserVoice blog, there’s a great article about how they structure their workflow into different boards. Check those out.

If you’ve got questions, I’ll try and answer them on Twitter. Go try out the latest updates on trello.com. It’s faster, easier, and more beautiful than ever.

Google Video Quality Report

support SQLite for functional testing purposes by bendavies · Pull Request #12 · fre5h/DoctrineEnumBundle · GitHub

Faker by joke2k


Comments:"Faker by joke2k"

URL:http://www.joke2k.net/faker/


 _|_|_|_|          _|
 _|        _|_|_|  _|  _|      _|_|    _| _|_|
 _|_|_|  _|    _|  _|_|      _|_|_|_|  _|_|
 _|      _|    _|  _|  _|    _|        _|
 _|        _|_|_|  _|    _|    _|_|_|  _|

Faker

Faker is a Python package that generates fake data for you. Whether you need to bootstrap your database, create good-looking XML documents, fill-in your persistence to stress test it, or anonymize data taken from a production service, Faker is for you.

Faker is heavily inspired by PHP's Faker, Perl's Data::Faker, and Ruby's Faker.

Basic Usage

Install with pip:

pip install fake-factory

Use faker.Factory.create() to create and initialize a faker generator, which can generate data by accessing properties named after the type of data you want.

from faker import Factory

faker = Factory.create()

faker.name()
# 'Lucy Cechtelar'

faker.address()
# "426 Jordy Lodge
# Cartwrightshire, SC 88120-6700"

faker.text()
# Sint velit eveniet. Rerum atque repellat voluptatem quia rerum. Numquam excepturi
# beatae sint laudantium consequatur. Magni occaecati itaque sint et sit tempore. Nesciunt
# amet quidem. Iusto deleniti cum autem ad quia aperiam.
# A consectetur quos aliquam. In iste aliquid et aut similique suscipit. Consequatur qui
# quaerat iste minus hic expedita. Consequuntur error magni et laboriosam. Aut aspernatur
# voluptatem sit aliquam. Dolores voluptatum est.
# Aut molestias et maxime. Fugit autem facilis quos vero. Eius quibusdam possimus est.
# Ea quaerat et quisquam. Deleniti sunt quam. Adipisci consequatur id in occaecati.
# Et sint et. Ut ducimus quod nemo ab voluptatum.

Each call to the method faker.name() yields a different (random) result. This is because faker uses __getattr__ magic and forwards faker.Generator.method_name() calls to faker.Generator.format(method_name).

for i in range(0, 10):
    print faker.name()

# Adaline Reichel
# Dr. Santa Prosacco DVM
# Noemy Vandervort V
# Lexi O'Conner
# Gracie Weber
# Roscoe Johns
# Emmett Lebsack
# Keegan Thiel
# Wellington Koelpin II
# Ms. Karley Kiehn V
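As a rough illustration of that forwarding trick, here is a minimal sketch (not Faker's actual source) showing how a __getattr__ hook can turn unknown attribute lookups into format() calls:

# Minimal sketch of __getattr__ forwarding; not Faker's real code.
class Generator(object):
    def format(self, name):
        # Faker would look up and run the matching formatter here.
        return "fake value for %r" % name

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails, so
        # generator.name() becomes generator.format('name').
        return lambda: self.format(name)

generator = Generator()
print generator.name() # fake value for 'name'
print generator.address() # fake value for 'address'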

Formatters

Each of the generator properties (like name, address, and lorem) are called "formatters". A faker generator has many of them, packaged in "providers". Here is a list of the bundled formatters in the default locale.

faker.providers.File:

fake.mimeType() # video/webm

faker.providers.UserAgent:

fake.chrome() # Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_8_4) AppleWebKit/5341 (KHTML, like Gecko) Chrome/13.0.803.0 Safari/5341
fake.firefox() # Mozilla/5.0 (Windows 95; sl-SI; rv:1.9.1.20) Gecko/2012-01-06 22:35:05 Firefox/3.8
fake.internetExplorer() # Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.1)
fake.linuxPlatformToken() # X11; Linux x86_64
fake.linuxProcessor() # x86_64
fake.macPlatformToken() # Macintosh; U; PPC Mac OS X 10_7_6
fake.macProcessor() # U; PPC
fake.opera() # Opera/9.41 (Windows CE; it-IT) Presto/2.9.168 Version/12.00
fake.safari() # Mozilla/5.0 (Windows; U; Windows NT 5.1) AppleWebKit/534.34.4 (KHTML, like Gecko) Version/5.0 Safari/534.34.4
fake.userAgent() # Mozilla/5.0 (iPod; U; CPU iPhone OS 3_2 like Mac OS X; en-US) AppleWebKit/531.15.3 (KHTML, like Gecko) Version/4.0.5 Mobile/8B119 Safari/6531.15.3
fake.windowsPlatformToken() # Windows 98; Win 9x 4.90

faker.providers.PhoneNumber:

fake.phoneNumber() # (593)652-1880

faker.providers.Miscelleneous:

fake.boolean() # True
fake.countryCode() # BB
fake.languageCode() # fr
fake.locale() # pt_GN
fake.md5() # ab9d3552b5c6e68714c04c35725ba73c
fake.nullBoolean() # True
fake.sha1() # 3fc2ede28f2596050f9a94c15c59b800175409d0
fake.sha256() # f06561a971d6b1306ecef60be336556d6de2540c2d0d2158f4d0ea3f212cd740

faker.providers.Internet:

fake.companyEmail() # ggreenfelder@ortizmedhurst.com
fake.domainName() # mayer.com
fake.domainWord() # gusikowski
fake.email() # gbrakus@johns.net
fake.freeEmail() # abbey60@yahoo.com
fake.freeEmailDomain() # hotmail.com
fake.ipv4() # 81.132.249.71
fake.ipv6() # 4c55:8c8b:54b5:746d:44ed:c7ab:486a:a50e
fake.safeEmail() # amalia49@example.com
fake.slug() # TypeError
fake.tld() # net
fake.uri() # http://www.parker.com/
fake.uriExtension() # .asp
fake.uriPage() # terms
fake.uriPath() # explore/list/app
fake.url() # http://dubuque.info/
fake.userName() # goodwin.edwin

faker.providers.Company:

fake.bs() # maximize end-to-end infrastructures
fake.catchPhrase() # Multi-tiered analyzing instructionset
fake.company() # Stanton-Luettgen
fake.companySuffix() # Group

faker.providers.DateTime:

fake.amPm() # AM
fake.century() # IX
fake.date() # 1985-02-17
fake.dateTime() # 1995-06-08 14:46:50
fake.dateTimeAD() # 1927-12-17 23:08:46
fake.dateTimeBetween() # 1999-08-22 22:49:52
fake.dateTimeThisCentury() # 1999-07-24 23:35:49
fake.dateTimeThisDecade() # 2008-01-27 01:08:37
fake.dateTimeThisMonth() # 2012-11-12 14:13:04
fake.dateTimeThisYear() # 2012-05-19 00:40:00
fake.dayOfMonth() # 23
fake.dayOfWeek() # Friday
fake.iso8601() # 2009-04-09T21:30:02
fake.month() # 03
fake.monthName() # April
fake.time() # 06:16:50
fake.timezone() # America/Noronha
fake.unixTime() # 275630166
fake.year() # 2002

faker.providers.Person:

fake.firstName() # Elton
fake.lastName() # Schowalter
fake.name() # Susan Pagac III
fake.prefix() # Ms.
fake.suffix() # V

faker.providers.Address:

fake.address() # 044 Watsica Brooks
 West Cedrickfort, SC 35023-5157
fake.buildingNumber() # 319
fake.city() # Kovacekfort
fake.cityPrefix() # New
fake.citySuffix() # ville
fake.country() # Monaco
fake.geo_coordinate() # 148.031951
fake.latitude() # 154.248666
fake.longitude() # 109.920335
fake.postcode() # 82402-3206
fake.secondaryAddress() # Apt. 230
fake.state() # Nevada
fake.stateAbbr() # NC
fake.streetAddress() # 793 Haskell Stravenue
fake.streetName() # Arvilla Valley
fake.streetSuffix() # Crescent

faker.providers.Lorem:

fake.paragraph() # Itaque quia harum est autem inventore quisquam eaque. Facere mollitia repudiandae
 qui et voluptas. Consequatur sunt ullam blanditiis aliquam veniam illum voluptatem.
fake.paragraphs() # ['Alias porro soluta eum voluptate. Iste consequatur qui non nam.',
 'Id eum sint eius earum veniam fugiat ipsum et. Et et occaecati at labore
 amet et. Rem velit inventore consequatur facilis. Eum consequatur consequatur
 quis nobis.', 'Harum autem autem totam ex rerum adipisci magnam adipisci.
 Qui modi eos eum vel quisquam. Tempora quas eos dolorum sint voluptatem
 tenetur cum. Recusandae ducimus deleniti magnam ullam adipisci ipsa.']
fake.sentence() # Eum magni soluta unde minus nobis.
fake.sentences() # ['Ipsam eius aut veritatis iusto.',
 'Occaecati libero a aut debitis sunt quas deserunt aut.',
 'Culpa dolor voluptatum laborum at et enim.']
fake.text() # Dicta quo eius possimus quae eveniet cum nihil. Saepe sint non nostrum.
 Sequi est sit voluptate et eos eum et. Pariatur non sunt distinctio magnam.
fake.word() # voluptas
fake.words() # ['optio', 'et', 'voluptatem']

Localization

faker.Factory can take a locale as an argument in order to return localized data. If no localized provider is found, the factory falls back to the default locale (en_EN).

from faker import Factory
fake = Factory.create('it_IT')
for i in range(0,10):
 print fake.name()
# Elda Palumbo
# Pacifico Giordano
# Sig. Avide Guerra
# Yago Amato
# Eustachio Messina
# Dott. Violante Lombardo
# Sig. Alighieri Monti
# Costanzo Costa
# Nazzareno Barbieri
# Max Coppola

You can check the available Faker locales in the source code, under the providers package. The localization of Faker is an ongoing process, for which we need your help. Don't hesitate to create localized providers for your own locale and submit a PR!
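As a rough sketch of what a provider can look like, here is a minimal custom provider. It assumes the BaseProvider / add_provider API found in later Faker releases, so the exact calls may differ for this version, and all names below are illustrative:

from faker import Faker
from faker.providers import BaseProvider

# Illustrative custom provider; formatters are just methods.
class OceanProvider(BaseProvider):
    def ocean(self):
        return self.random_element(["Atlantic", "Pacific", "Indian", "Arctic"])

fake = Faker()
fake.add_provider(OceanProvider)
print fake.ocean() # e.g. 'Pacific'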

Using from shell

In a Python environment with faker installed, you can use it with:

python -m faker [option] [*args]

[option]:

  • a formatter name, such as text or address: display the result of that formatter
  • a Provider name, such as Lorem: display all of that Provider's fakes

[*args]: pass values to the formatter (currently only strings)

$ python -m faker address
968 Bahringer Garden Apt. 722
Kristinaland, NJ 09890

Seeding the Generator

You may want to always get the same generated data, for instance when using Faker for unit testing purposes. The generator offers a seed() method, which seeds the random number generator. Running the same script twice with the same seed produces the same results.

from faker import Faker
fake = Faker()
fake.seed(4321)
print fake.name() # Margaret Boehm

Tests

Run tests:

$ python setup.py test

or

$ python -m unittest -v faker.tests

Write documentation for providers:

$ python -m faker > docs.txt

License

Faker is released under the MIT Licence. See the bundled LICENSE file for details.

Credits
