Channel: Hacker News 100

Startup developing free HIV/AIDS vaccine accepted into Y Combinator | VentureBeat | Health | by Rebecca Grant


URL:http://venturebeat.com/2014/01/23/startup-developing-hivaids-vaccine-is-2nd-nonprofit-accepted-into-y-combinator/


A vaccine for HIV/AIDS has been the holy grail of the medical community for decades, and these guys may have found it.

Immunity Project is developing a free HIV/AIDS vaccine through a radical new approach that involves data analysis and machine learning. It is one of seven nonprofits in the latest batch at elite accelerator Y Combinator, which accepted its first nonprofit, Watsi, last year.

Immunity Project also launched a crowdfunding campaign using Crowdhoster today with the goal of raising $482,000. This money will help Immunity fund its final experiment using human blood before it begins the first phase of clinical trials.

“This is the ultimate application of informatics to medicine,” cofounder and CEO Dr. Reid Rubsamen said in an interview. “So much vaccine design since 1953 has been based on neutralizing antibodies, but that legacy approach doesn’t work for HIV. The virus is too smart and can mutate so quickly. We are doing something very different.”

When cells are infected by HIV, they send pieces of HIV protein, or “flags,” to their surface for the immune system to identify and attack. However, there are hundreds of signals coming from cells, and most people’s bodies don’t have the ability to pick out the HIV flags from all the other noise; the exception is a small group of people known as “controllers.”

One out of every 300 people living with HIV is a “controller,” meaning they carry low levels of the virus in a dormant state that never turns into AIDS. Their “immunity” to the virus is due to a unique targeting capability in their immune system that enables it to neutralize HIV molecules by hitting them in weak spots.

Immunity’s algorithm sorts through the enormous number of combinations of HIV genome and human immune system genetic data to figure out how controllers are able to keep HIV dormant.

“All the information from the immune system and the HIV genome generates this really, really big dataset,” Rubsamen said. “We use machine learning to understand what is happening with this dataset and reverse engineer this biological process. Hitting that tiny target is the output of a giant computer science effort.”

The goal of the vaccine is basically to turn everyone into a controller by training the immune system to attack the right targets.

The algorithm Immunity uses was written by Dr. David Heckerman and his colleagues at Microsoft e-Science Research. It is actually based on similar principles to spam filtering software because both spam and HIV “spread rapidly, mutate relentlessly, and have a multitude of variations.”

“Dr. Heckerman designed algorithms to find the part on HIV that absolutely cannot mutate — the place where if it changes, the virus stops functioning,” Immunity said on its site. “In both cases, researchers are using machine learning to create statistical ways of dealing with large data sets in order to find the needle in the haystack.”
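The article doesn't publish the method itself, so here is only a hypothetical sketch of the underlying idea (toy data and a simple Shannon-entropy score; not Dr. Heckerman's actual algorithm): scan aligned HIV protein sequences for positions that never vary.

from collections import Counter
from math import log2

def position_entropies(aligned_seqs):
    # Score each column of an alignment by Shannon entropy;
    # entropy 0 means the position is perfectly conserved.
    length = len(aligned_seqs[0])
    entropies = []
    for i in range(length):
        counts = Counter(seq[i] for seq in aligned_seqs)
        total = sum(counts.values())
        h = -sum((c / total) * log2(c / total) for c in counts.values())
        entropies.append(h)
    return entropies

# Toy alignment (made-up sequences, not real HIV data).
seqs = ["MKVLA", "MRVLG", "MKVLS", "MQVLA"]
h = position_entropies(seqs)
print([i for i, e in enumerate(h) if e == 0.0])  # conserved positions -> [0, 2, 3]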

Rubsamen said this is the first vaccine in history developed in this manner and is viewed as a rogue project by the immunology community. That said, early tests on mice have shown “overwhelmingly positive results.”

Rubsamen and Heckerman are both medical doctors and computer scientists. The two went to Stanford Medical School together and have known each other for decades. Rubsamen previously founded Aradigm, a public pharmaceutical company that develops inhalation drug products. He has more than 60 patents for drug delivery technologies and said he is leveraging this expertise to create a nasal spray, rather than an injectable vaccine, which will make it easier to administer in the developing world.

Over 35 million people are currently living with HIV/AIDS, and 6,300 people are newly infected every day. Over 4,000 people die each day from AIDS, and nearly 36 million people have died of HIV-related causes.

70 percent of all people living with HIV live in sub-Saharan Africa.

Rubsamen said that, unlike antiretroviral treatments, Immunity wouldn’t require a lifetime of medication — you could take the vaccine and be done. It also has potential to act preventatively. Plus, Immunity plans to make the vaccine free.

However, there are still years of work, experimenting, clinical trials, and federal approvals to get through, and millions of dollars to be raised. The vaccine has not officially entered Phase 1 trials yet; the team hopes to begin them in December 2014. The third and final phase of clinical trials, and widespread vaccination efforts, wouldn’t happen before 2016.

If it works, this will be a huge victory in the fight against HIV/AIDS and a watershed moment in immunology.

The Immunity Project is a partnership between biotech firm Flow Pharma and digital agency SparkArt. Rubsamen is the CEO of Flow Pharma, and Immunity Project cofounder Naveen Jain is a cofounder of SparkArt. Microsoft Research contributed $1 million to the project in 2011.

VentureBeat is creating an index of the top online health services for consumers. Take a look at our initial suggestions and complete the survey to help us build a definitive index. We’ll publish the official index in the weeks to come, and for those who fill out the survey, we’ll send you an expanded report free of charge. To speak with the analyst who put this survey together and get more in-depth information, inquire within.

Linda Liukas' Programming book for Children has Huge First Day on Kickstarter


URL:http://arcticstartup.com/2014/01/23/linda-liukas-programming-book-for-children-has-huge-first-day-on-kickstarter


By Greg Anderson, January 23, 2014


"It's been amazing so far," says Linda Liukas of the new programming book for children, Hello Ruby, which was put on Kickstarter this morning. "In 3.5 hours it reached its [$10,000] goal. Let's see what happens next."

The topic of coding education isn't new to her - on top of her early involvement in the Aalto entrepreneurship society, Liukas is a cofounder of Railsgirls, a global non-profit that has taught programming to tens of thousands of women in over 160 cities. Additionally, she was one of the early Codecademy employees, where she worked as a community manager.

Liukas tells us that the Hello Ruby project got its roots three years ago when Railsgirls got started and they needed some illustrations for the web and their events. "I found when I had problems thinking about garbage collection, for example, I would draw Ruby. And then last September I decided I need to be a little more systematic and someone said, 'oh, you should do a Kickstarter.'"

At the time of publishing, the book has raised $16,283 with 29 days left to go.

The Hello Ruby book will be a 32-page hardcover that tells a traditional story of friendship, being different, and technology. Rather than an artsy how-to, the book will tell the story of Ruby, a small girl who visits castles and solves problems with wise penguins.

"We instinctively thick in narrative," says Liukas. "Instead of just giving kids iPad applications that react once, I think theres longer term value in this style."

Additionally, the bundle will come with a workbook for parents to sit down with their kids and think about solving problems with general programming concepts like loops, lists, conditionals, sequences, and variables. Rather than punishing kids for forgetting that semicolon, the book and workbook are focused on getting kids thinking about solving problems and teaching basic programming concepts so that they can apply them later.
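For flavor, here is a toy snippet (mine, not from the book or workbook) touching each of those concepts:

# A variable, a list, a loop, and a conditional in a few lines.
friends = ["penguin", "fox", "snow leopard"]   # a list
greeting = "Hello"                              # a variable

for friend in friends:                          # a loop over a sequence
    if friend == "penguin":                     # a conditional
        print(greeting + ", wise " + friend + "!")
    else:
        print(greeting + ", " + friend + "!")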

She promises on the Kickstarter that Hello Ruby will be shipped by August, but Liukas tells us that she recognizes everyone wants the book sooner than that. This is her first dive into the publishing world, so she promises to keep backers updated on her progress. As you can see in the video below, she's kind of a delightful person, so you might as well throw in the minimum $5 commitment just to hear how the publishing process goes.

Based on the big initial response, Liukas tells us she might put up some milestone rewards for her backers. And perhaps an iPad app later wouldn't be out of the question, but right now she's focused on Ruby's hardcover world.

You can find the project on Kickstarter.


Donut math: how donut.c works -- a1k0n


URL:http://www.a1k0n.net/2011/07/20/donut-math.html


There has been a sudden resurgence of interest in my "donut" code from 2006, and I’ve had a couple requests to explain this one. It’s been five years now, so it’s not exactly fresh in my memory; I will reconstruct it from scratch, in great detail, and hopefully get approximately the same result.

This is the code and the output, animated in Javascript:

 k;double sin()
 ,cos();main(){float A=
 0,B=0,i,j,z[1760];char b[
 1760];printf("\x1b[2J");for(;;
 ){memset(b,32,1760);memset(z,0,7040)
 ;for(j=0;6.28>j;j+=0.07)for(i=0;6.28>i;i+=0.02){float c=sin(i),d=cos(j),e=
 sin(A),f=sin(j),g=cos(A),h=d+2,D=1/(c*
 h*e+f*g+5),l=cos (i),m=cos(B),n=s\
in(B),t=c*h*g-f* e;int x=40+30*D*
(l*h*m-t*n),y= 12+15*D*(l*h*n
+t*m),o=x+80*y, N=8*((f*e-c*d*g
 )*m-c*d*e-f*g-l *d*n);if(22>y&&
 y>0&&x>0&&80>x&&D>z[o]){z[o]=D;;;b[o]=
 ".,-~:;=!*#$@"[N>0?N:0];}}/*#****!!-*/
 printf("\x1b[H");for(k=0;1761>k;k++)
 putchar(k%80?b[k]:10);A+=0.04;B+=
 0.02;}}/*****####*******!!=;:~
 ~::==!!!**********!!!==::-
 .,~~;;;========;;;:~-.
 ..,--------,*/

At its core, it’s a framebuffer and a Z-buffer into which I render pixels. Since it’s just rendering relatively low-resolution ASCII art, I massively cheat. All it does is plot pixels along the surface of the torus at fixed-angle increments, and does it densely enough that the final result looks solid. The “pixels” it plots are ASCII characters corresponding to the illumination value of the surface at each point: .,-~:;=!*#$@ from dimmest to brightest. No raytracing required.

So how do we do that? Well, let’s start with the basic math behind 3D perspective rendering. The following diagram is a side view of a person sitting in front of a screen, viewing a 3D object behind it.

To render a 3D object onto a 2D screen, we project each point (x,y,z) in 3D-space onto a plane located z’ units away from the viewer, so that the corresponding 2D position is (x’,y’). Since we’re looking from the side, we can only see the y and z axes, but the math works the same for the x axis (just pretend this is a top view instead). This projection is really easy to obtain: notice that the origin, the y-axis, and point (x,y,z) form a right triangle, and a similar right triangle is formed with (x’,y’,z’). Thus the relative proportions are maintained: y’/z’ = y/z, so y’ = z’y/z.

So to project a 3D coordinate to 2D, we scale a coordinate by the screen distance z’. Since z’ is a fixed constant, and not functionally a coordinate, let’s rename it to K1, so our projection equation becomes (x’, y’) = (K1·x/z, K1·y/z). We can choose K1 arbitrarily based on the field of view we want to show in our 2D window. For example, if we have a 100x100 window of pixels, then the view is centered at (50,50); and if we want to see an object which is 10 units wide in our 3D space, set back 5 units from the viewer, then K1 should be chosen so that the projection of the point x=10, z=5 is still on the screen with x’ < 50: 10K1/5 < 50, or K1 < 25.

When we’re plotting a bunch of points, we might end up plotting different points at the same (x’,y’) location but at different depths, so we maintain a z-buffer which stores the z coordinate of everything we draw. If we need to plot a location, we first check to see whether we’re plotting in front of what’s there already. It also helps to compute 1/z and use that when depth buffering, because:

  • 1/z = 0 corresponds to infinite depth, so we can pre-initialize our z-buffer to 0 and have the background be infinitely far away
  • we can re-use 1/z when computing x’ and y’: dividing once and multiplying by 1/z twice is cheaper than dividing by z twice.

Now, how do we draw a donut, AKA torus? Well, a torus is a solid of revolution, so one way to do it is to draw a 2D circle around some point in 3D space, and then rotate it around the central axis of the torus. Here is a cross-section through the center of a torus:

So we have a circle of radius R1 centered at point (R2,0,0), drawn on the xy-plane. We can draw this by sweeping an angle — let’s call it θ — from 0 to 2π: the points on the circle are (x, y, z) = (R2 + R1 cos θ, R1 sin θ, 0).

Now we take that circle and rotate it around the y-axis by another angle — let’s call it φ. To rotate an arbitrary 3D point around one of the cardinal axes, the standard technique is to multiply by a rotation matrix. So if we take the previous points and rotate about the y-axis we get ((R2 + R1 cos θ) cos φ, R1 sin θ, (R2 + R1 cos θ) sin φ), using the sign convention that matches the code below.

But wait: we also want the whole donut to spin around on at least two more axes for the animation. They were called A and B in the original code: it was a rotation about the x-axis by A and a rotation about the z-axis by B. This is a bit hairier, so I’m not even going to write the result yet, but it’s a bunch of matrix multiplies.
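For reference, the full transformation is a row vector multiplied on the right by three rotation matrices. This is reconstructed here to match the signs in the pseudocode at the end of this post, so treat it as a sketch of the conventions rather than the original figure:

(x, y, z) = (R2 + R1 cos θ,  R1 sin θ,  0)
            [  cos φ   0   sin φ ]   [ 1     0       0   ]   [  cos B   sin B   0 ]
            [   0      1    0    ] · [ 0   cos A   sin A ] · [ -sin B   cos B   0 ]
            [ -sin φ   0   cos φ ]   [ 0  -sin A   cos A ]   [    0       0     1 ]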

Churning through the above gets us an (x,y,z) point on the surface of our torus, rotated around two axes, centered at the origin. To actually get screen coordinates, we need to:

  • Move the torus somewhere in front of the viewer (the viewer is at the origin) — so we just add some constant to z to move it backward.
  • Project from 3D onto our 2D screen.

So we have another constant to pick, call it K2, for the distance of the donut from the viewer, and our projection now looks like:

K1 and K2 can be tweaked together to change the field of view and flatten or exaggerate the depth of the object.

Now, we could implement a 3x3 matrix multiplication routine in our code and implement the above in a straightforward way. But if our goal is to shrink the code as much as possible, then every 0 in the matrices above is an opportunity for simplification. So let’s multiply it out. Churning through a bunch of algebra (thanks Mathematica!), the full result is:

x = (R2 + R1 cos θ)(cos B cos φ + sin A sin B sin φ) - R1 sin θ cos A sin B
y = (R2 + R1 cos θ)(sin B cos φ - sin A cos B sin φ) + R1 sin θ cos A cos B
z = K2 + cos A (R2 + R1 cos θ) sin φ + R1 sin θ sin A

Well, that looks pretty hideous, but we can precompute some common subexpressions (e.g. all the sines and cosines, and the recurring term R2 + R1 cos θ) and reuse them in the code. In fact I came up with a completely different factoring in the original code but that’s left as an exercise for the reader. (The original code also swaps the sines and cosines of A, effectively rotating by 90 degrees, so I guess my initial derivation was a bit different but that’s OK.)

Now we know where to put the pixel, but we still haven’t even considered which shade to plot. To calculate illumination, we need to know the surface normal — the direction perpendicular to the surface at each point. If we have that, then we can take the dot product of the surface normal with the light direction, which we can choose arbitrarily. That gives us the cosine of the angle between the light direction and the surface direction: if the dot product is >0, the surface is facing the light, and if it’s <0, it faces away from the light. The higher the value, the more light falls on the surface.

The derivation of the surface normal direction turns out to be pretty much the same as our derivation of the point in space. We start with a point on a circle, rotate it around the torus’s central axis, and then make two more rotations. The surface normal of the point on the circle is fairly obvious: it’s the same as the point on a unit (radius=1) circle centered at the origin.

So our surface normal (Nx, Ny, Nz) is derived the same as above, except the point we start with is just (cos θ, sin θ, 0). Then we apply the same rotations:

Nx = cos θ (cos B cos φ + sin A sin B sin φ) - sin θ cos A sin B
Ny = cos θ (sin B cos φ - sin A cos B sin φ) + sin θ cos A cos B
Nz = cos A cos θ sin φ + sin θ sin A

So which lighting direction should we choose? How about we light up surfaces facing behind and above the viewer: (0, 1, -1). Technically this should be a normalized unit vector, and this vector has a magnitude of √2. That’s okay – we will compensate later. Therefore we compute the above (Nx, Ny, Nz), throw away the x component and get our luminance L = Ny - Nz.

Expanding that out with the rotations above gives:

L = cos φ cos θ sin B - cos A cos θ sin φ - sin A sin θ + cos B (cos A sin θ - cos θ sin A sin φ)

Again, not too pretty, but not terrible once we’ve precomputed all the sines and cosines.

So now all that’s left to do is to pick some values for R1, R2, K1, and K2. In the original donut code I chose R1=1 and R2=2, so it has the same geometry as my cross-section diagram above. K1 controls the scale, which depends on our pixel resolution and is in fact different for x and y in the ASCII animation. K2, the distance from the viewer to the donut, was chosen to be 5.

I’ve taken the above equations and written a quick and dirty canvas implementation here, just plotting the pixels and the lighting values from the equations above. The result is not exactly the same as the original as some of my rotations are in opposite directions or off by 90 degrees, but it is qualitatively doing the same thing.

Here it is:

It’s slightly mind-bending because you can see right through the torus, but the math does work! Convert that to an ASCII rendering with z-buffering, and you’ve got yourself a clever little program.

Now, we have all the pieces, but how do we write the code? Roughly like this (some pseudocode liberties have been taken with 2D arrays):

const float theta_spacing = 0.07;
const float phi_spacing = 0.02;
const float R1 = 1;
const float R2 = 2;
const float K2 = 5;
// Calculate K1 based on screen size: the maximum x-distance occurs roughly at
// the edge of the torus, which is at x=R1+R2, z=0. we want that to be
// displaced 3/8ths of the width of the screen, which is 3/4th of the way from
// the center to the side of the screen.
// screen_width*3/8 = K1*(R1+R2)/(K2+0)
// screen_width*K2*3/(8*(R1+R2)) = K1
const float K1 = screen_width*K2*3/(8*(R1+R2));
render_frame(float A, float B) {
  // precompute sines and cosines of A and B
  float cosA = cos(A), sinA = sin(A);
  float cosB = cos(B), sinB = sin(B);

  char output[0..screen_width, 0..screen_height] = ' ';
  float zbuffer[0..screen_width, 0..screen_height] = 0;

  // theta goes around the cross-sectional circle of a torus
  for (float theta = 0; theta < 2*pi; theta += theta_spacing) {
    // precompute sines and cosines of theta
    float costheta = cos(theta), sintheta = sin(theta);

    // phi goes around the center of revolution of a torus
    for (float phi = 0; phi < 2*pi; phi += phi_spacing) {
      // precompute sines and cosines of phi
      float cosphi = cos(phi), sinphi = sin(phi);

      // the x,y coordinate of the circle, before revolving (factored
      // out of the above equations)
      float circlex = R2 + R1*costheta;
      float circley = R1*sintheta;

      // final 3D (x,y,z) coordinate after rotations, directly from
      // our math above
      float x = circlex*(cosB*cosphi + sinA*sinB*sinphi) - circley*cosA*sinB;
      float y = circlex*(sinB*cosphi - sinA*cosB*sinphi) + circley*cosA*cosB;
      float z = K2 + cosA*circlex*sinphi + circley*sinA;
      float ooz = 1/z;  // "one over z"

      // x and y projection. note that y is negated here, because y
      // goes up in 3D space but down on 2D displays.
      int xp = (int) (screen_width/2 + K1*ooz*x);
      int yp = (int) (screen_height/2 - K1*ooz*y);

      // calculate luminance. ugly, but correct.
      float L = cosphi*costheta*sinB - cosA*costheta*sinphi - sinA*sintheta
        + cosB*(cosA*sintheta - costheta*sinA*sinphi);

      // L ranges from -sqrt(2) to +sqrt(2). If it's < 0, the surface
      // is pointing away from us, so we won't bother trying to plot it.
      if (L > 0) {
        // test against the z-buffer. larger 1/z means the pixel is
        // closer to the viewer than what's already plotted.
        if (ooz > zbuffer[xp, yp]) {
          zbuffer[xp, yp] = ooz;
          int luminance_index = L*8;  // brings L into the range 0..11 (8*sqrt(2) = 11.3)
          // now we lookup the character corresponding to the
          // luminance and plot it in our output:
          output[xp, yp] = ".,-~:;=!*#$@"[luminance_index];
        }
      }
    }
  }

  // now, dump output[] to the screen.
  // bring cursor to "home" location, in just about any currently-used
  // terminal emulation mode
  printf("\x1b[H");
  for (int j = 0; j < screen_height; j++) {
    for (int i = 0; i < screen_width; i++) {
      putchar(output[i, j]);
    }
    putchar('\n');
  }
}

The Javascript source for both the ASCII and canvas rendering is right here.


#Emacs, naked

Lenovo Agrees to Buy IBM Server Business for $2.3 Billion

Another Google Privacy Flaw – Calendar Unexpectedly Leaks Private Information (Disclosed) ← Terence Eden's Blog


URL:http://shkspr.mobi/blog/2014/01/another-google-privacy-flaw/


My wife likes to set reminders for herself in Google Calendar.


Recently, she added a note to her personal Google Calendar reading "Email [email protected] to discuss pay rise" and set the date for a few months from now. She'd had a discussion with her boss, Alice, and they'd agreed to talk about salary later in the year.

A few moments later, Alice sent her a "Meeting Accepted" email.

What... The...?

Although pretty embarrassing, it could have been a lot worse. It could have been "Email [email protected] with excuse why we can't see her" or perhaps "Email [email protected] with divorce details" or even "Email [email protected] to demand red stapler back" or... well, you get the picture.

Luckily, my wife doesn't have a Google+ profile, so there was no information leak other than her email address (which wasn't "huggle.wuggle.2012" or anything daft like that!)

We've tried several times to recreate this behaviour. Here's what we discovered:

  • If you use Google Calendar on the web and put a Gmail address in the subject line, that user will have the event added to the calendar.
  • They will not receive an email notification - although they will get a "meeting reminder" pop-up.
  • Creating an event on an Android phone does not trigger a meeting request.
  • Some non-Gmail addresses will also see the meeting in their calendar - but others will not.
  • When you delete a calendar item, the "Cancellation" notification is emailed regardless of whether the user received the original invite.


We were unable to determine which non-Gmail addresses would receive the item in their calendar. Some which were hosted with Google didn't receive the pseudo-invitation. Some accounts hosted on Microsoft Exchange got the invite while others on seemingly similar systems didn't.

Here's a video showing it in action.

Note that when a user fills in the pop-up, Google Calendar asks for confirmation to send a meeting invite. When using the full interface, no warning whatsoever is given.

Impact

Google has tried to be clever here. It has failed. Just because I am talking about someone, it doesn't mean I am talking to someone.

There are two main risks here - the user could expose her private Gmail account and associated Google+ data, and she could also reveal her private thoughts and feelings.

Google really needs to work harder at protecting the privacy of its users.

Disclosure

This privacy issue was formally disclosed to Google on 6th January 2014.
On 22nd January, they responded by saying they didn't consider it a problem.

We reviewed your report. After careful consideration by our security team, we feel that the issue has minimal impact on the security of our users. Let us know if you believe that this determination may be incorrect. If you'd submitted your report as part of our reward program, this means it doesn't qualify for reward or credit. Thanks for your help!

As much as I'm disappointed not to be getting a $10,000 bug bounty, I'm more upset that Google repeatedly finds itself failing to keep its users' private information private.

Update: according to a comment on the HackerNews discussion, problems like this have been reported to Google as far back as 2010.


'Google outed me' | ZDNet


URL:http://www.zdnet.com/google-outed-me-7000025416/


If you haven't heard about it by now, last Wednesday, ESPN's Grantland website published an article called Dr. V’s Magical Putter by Caleb Hannan. It was supposed to be a profile about a golf club, but instead its purpose - and dramatic climax - was to out the club's inventor as transgender.

The inventor’s name was Dr. Essay Anne Vanderbilt. She had agreed to be the subject of the story reluctantly, and only if Hannan wrote about "the science, not the scientist."

As Hannan investigated Vanderbilt, he found out that her academic background didn't add up, and he also learned that she was transgender.

Upon learning this, Hannan told her he was going to break the agreement not to write about her personal life and reveal her transgender status without her consent. He then outed her as transgender to her investors and colleagues, and went forward with an article that was intended to out her online, and to the world.

After being outed to her colleagues, and before the article was published, Dr. Essay Anne Vanderbilt killed herself.

It reminded me about another transgender woman who was recently outed without her consent - by Google Plus.

A woman was using her old (male) name at work, and when her Android phone updated to KitKat - with Google+ integrating chat and SMS into "hangouts" - this is what happened when she texted a coworker:

(ICYMI earlier: KitKat did indeed out me to a coworker. I am freaking out.) — Erika Sorensen (@eiridescent) January 3, 2014

Somehow I didn't think through the potential consequences of Google+ embedding itself ever deeper into stock Android stuff — Erika Sorensen (@eiridescent) January 3, 2014

Google's response was that her outing was "user error" - Google blamed her, the user, for not understanding the new, confusing integration.

ESPN Grantland editor Bill Simmons issued a 2,700-word statement in which he twice lamented failing his writer Hannan, but never once expressed concern for failing Ms. Vanderbilt.

One could argue that ESPN may not have caused Ms. Vanderbilt's suicide, but its actions in outing her have been acknowledged by ESPN itself and the general public as having played an active, key role in her death.

(Grantland founder Bill Simmons has since posted a letter to his readers apologizing for outing her, saying, "I don’t think [Hannan] understood the moral consequences of that decision, and frankly, neither did anyone working for Grantland.")

Vanderbilt did not want to be out. She wanted to blend in. Some have been quick to point out that her world would have crumbled even had she not been outed as trans, but simply found out to have made up her credentials - those people aren't familiar with the world of sports entrepreneurialism.

Vanderbilt would have merely joined the ranks of sports entrepreneurs who got caught changing their background. Michael Vick, Kevin Hart, Tim Johnson, Nick Saban, Miguel Tejada, Manti Te'o, Rosie Ruiz and George O'Leary are just a few.

And like the people on that list, Vanderbilt would have been busted, but if her product or performance was great on its own merit, she would have recovered.

Instead, she was outed as a transgender woman to someone she works with, and before she was to be outed to the world, rather than go through this hell all over again, she took her own life.

Really pissed right now. Fucking Google. I was NOT ready to tell any of my coworkers yet. — Erika Sorensen (@eiridescent) January 3, 2014

I'm just glad I live in a state where it'd be illegal to fire me. — Erika Sorensen (@eiridescent) January 3, 2014

Since the release of the latest mobile software Android 4.4, codenamed KitKat, the instant messaging app Hangouts has become the default text-messaging app on phones and tablets running with the newly installed operating system.

But Sorensen wasn't the only transgender person made unsafe by Google+ in Google's ruthless objective to use Android to reorganize people's lives to suit Google's bottom line.

Four days later on January 7, transgender Android user Zoe posted to Google Product Forums > Hangouts that she now needed to change her name and gender display. She did not receive a response.

The same day (while Ms. Sorensen was waiting fretfully for her employer's HR person to return to work), Android user Nora posted "Legal name instead of actual desired/registered name shows up in Hangout History" to Google Product Forums > Google Chat:

I'm transgender... this account was registered using my preferred name, Nora, but when I look at hangout histories, certain locations on my android phone, and a few other places, I see my legal name popping up instead. I don't remember actually giving this detail to Google, nor can I find anywhere within the settings where anything other than "Nora" is listed... I don't know how many other people can see me listed as such, but it's really kind of unpleasant and outing, and a bit triggering really...

There're trans ppl for whom this tech flaw would get them fired, w/cascading consequences. There're trans youth who'd be outed to parents. — Erika Sorensen (@eiridescent) January 9, 2014

Many trans ppl lose everything or nearly so when they come out. This could've been utterly disastrous. — Erika Sorensen (@eiridescent) January 9, 2014

The issue with identity and Google+ Hangouts overwriting people's Gmail and SMS contacts has been trans-unfriendly since its rollout. One woman worried about the privacy of her transgender sister's identity wrote in Google's Forums (Gmail),

My sister is transgendered and has yet to legally switch to female, and because of this has yet to change her name on her Google+ since she has professional contacts on her page. (...) Now that I have used the video chat option on Hangouts, everything is reverting back to her old name.

She did not receive a response.

After Google called Erika Sorensen's outing "user error," writer Lexi Cannes commented on the matter January 8, saying "Google is facing increased complaints that they are dismissive of privacy errors triggered by upgrades and other changes. Transgender issues with Google began the day Google+ was launched."

In my eyes, finding this distraught post from a Google+ user one month after Plus launched brought Cannes' comments and Ms. Vanderbilt's suicide full circle:

I am FTM transgender, and outside of this channel (which is meant to be detached from my personal, real-life acquaintances) I have not come out yet. [the way FTM people are treated when they come out] sickens me and has pushed me farther and farther into the closet to the point where I fear I will not be able to get out before I end up killing myself out of stress. When I opened up my youtube page today, I was greeted by my birth name, the one that people know and call me by in my outside life, attached to the google+ connection bar just under my profile picture. (...) I frantically searched through the google+ page and the youtube settings and found no option to remove connections. Eventually, I was forced to delete my entire google+ account, hoping that would at least remove my legal name from my home page... But it DIDN'T. My youtube home page still displays that information loud and clear, even though I DE-ACTIVATED MY ACCOUNT. This does not make me feel safe. I feel like my personal privacy as a human being has been stolen from me. So please, if anyone can tell me how to permanently remove google+ and facebook connections from the new page, it would mean so much to me. I want to be a part of a safe community.

This, Ms. Vanderbilt might have said. What he said.

Still. Ever-shrinking privacy, "real name" policies, etc. aren't just abstract civil liberties issues. Trans ppl disproportionately harmed. — Erika Sorensen (@eiridescent) January 3, 2014

On some level, I want to imagine that Google will fix this.

I don't want to think that controlling our own identities doesn't matter to Google; or that, to Google, we are the faulty parts of its machine. Or that we are Google Plus with a body vaguely attached. Or that, to Google, the problems are our own faults, and any calls for respect or privacy in a painful world are just annoying to a company which has better things to do, like terrifying us with the privacy nightmare of Google Glass and making it easier for bulk data consolidators to catalog our personally identifying information.

Commenting on Dr. V’s Magical Putter, writer Max Potter was quoted on Nieman Storyboard saying,

I think that piece is emblematic of so much of what I think is wrong with what’s happening in journalism today. We’ve got journalism and journalists struggling more than ever before to make a name and a living, and thereby more and more pressure on landing an amazing story. We’ve got less and less staff and experience, fewer and fewer “adults” around, more and more talented kids desperate to make a name and very little mentoring. And, seems to me, we still have this (white) male dominated journalism elite, with their myopic, pseudo-macho ideas of what truth and the pursuit of it means. And … this is what we get.

If this is what we get, then Google's little Plus project is a loaded gun pointed right at anyone whose privacy is what keeps them alive.

ZDNet has emailed Google for comment and will update this article if it responds.

Tarkovsky Films Now Free Online | Open Culture


URL:http://www.openculture.com/2010/07/tarkovksy.html


Andrei Tarkovsky (1932-1986) firmly positioned himself as the finest Soviet director of the post-War period. But his influence extended well beyond the Soviet Union.  The Cahiers du cinéma consistently ranked his films on their top ten annual lists. Ingmar Bergman went so far as to say, “Tarkovsky for me is the greatest [director], the one who invented a new language, true to the nature of film, as it captures life as a reflection, life as a dream.” And Akira Kurosawa acknowledged his influence too, adding, “I love all of Tarkovsky’s films. I love his personality and all his works. Every cut from his films is a marvelous image in itself.”

Shot between 1962 and 1986, Tarkovsky’s seven feature films often grapple with metaphysical and spiritual themes, using a distinctive cinematic style. Long takes, slow pacing and metaphorical imagery – they all figure into the archetypical Tarkovsky film. (Watch the scene from Stalker above.)

You can now watch Tarkovsky’s films online – for free. Each film is listed in our collection of Free Online Movies, but here you can access the major films in the order in which they were made. Almost all of the films below were placed online by Mosfilm, the largest and oldest film studio in Russia.

NOTE: if you access the films via YouTube, be sure to click “CC” at the bottom of the videos to access the subtitles.




Killing the Crunch Mode Antipattern - Chad Fowler

Watchdog Report Says N.S.A. Program Is Illegal and Should End

How I found a Remote Code Execution bug affecting Facebook's servers


URL:http://www.ubercomp.com/posts/2014-01-16_facebook_remote_code_execution


XXE in OpenID: one bug to rule them all, or how I found a Remote Code Execution flaw affecting Facebook's servers

Today I want to share a tale about how I found a Remote Code Execution bug affecting Facebook. Like all good tales, the beginning was a long time ago (actually, just over a year, but I count using Internet Time, so bear with me). If you find this interesting and want to hire me to do a security focused review or penetration testing in your own (or your company's) code, don't hesitate to send me an email.

September 22nd, 2012 was a very special day for me, because it was the day I found a XML External Entity Expansion bug affecting the part of Drupal that handled OpenID. XXEs are very nice. They allow you to read any files on the filesystem, make arbitrary network connections, and just for kicks you can also DoS the server with the billion laughs attack.
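The author deliberately withholds his OpenID-specific PoC, so here is only the textbook shape of the bug class, as a hedged sketch: a hypothetical Python snippet, assuming the third-party lxml package, whose parser has historically resolved external entities unless told not to.

# Generic illustration of the XXE class (not the withheld OpenID PoC).
# Whether the entity is resolved depends on parser configuration/version.
from lxml import etree

xml = b"""<?xml version="1.0"?>
<!DOCTYPE x [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
<x>&xxe;</x>"""

# resolve_entities=True mirrors a vulnerable configuration; hardened
# parsers set resolve_entities=False (or forbid DTDs entirely).
parser = etree.XMLParser(resolve_entities=True)
root = etree.fromstring(xml, parser)
print(root.text)  # on a vulnerable setup: the contents of /etc/passwd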

I was so naive at the time that I didn't even bother to check if anyone else was vulnerable. I reported it immediately. I wanted to start putting CVEs on my resume as soon as possible, and this would be the first (it eventually got CVE-2012-4554 assigned to it). Only five days later it occurred to me that OpenID was pretty heavily used and so maybe other places were vulnerable as well. I decided to check the StackOverflow login form. Indeed, it was vulnerable to the whole thing (file reading and all).

Then I decided to try to find OpenID handling code running inside Google's servers. I wasn't able to read files or open network connections, but both App Engine and Blogger were vulnerable to DoS. This is how I got my first bounty from Google, by the way. It was a US$ 500 bounty.

After reporting the bug to Google, I ran some more tests and eventually noticed that the bug I had in my hands was affecting a lot of implementations. I won't enumerate the libraries here, but let me just say that this single bug affected, in one way or another, libraries implemented in Java, C#, PHP, Ruby, Python, Perl, and more... The only reason I'm not publishing the PoC here is that there are a lot of servers out there that are still vulnerable. Of course, the people who know about security will just read OpenID and XXE and then write an exploit in about 5 minutes, but I digress.

So after contacting (or trying to contact) every OpenID library author out there, I decided to write to the member-only security list hosted at the OpenID foundation an email titled "One bug to rule them all: many implementations of OpenID are vulnerable to XXE" to share my findings. I figured most library authors would be members of that list and so patches would be released for everyone very soon. I was right, but only partially.

The persistent readers who are still with me by now are thinking: what does a Facebook Remote Code Execution bug have to do with all this? Well, I knew Facebook allowed OpenID login in the past. However, when I first found the OpenID bug in 2012 I couldn't find any endpoint that would allow me to enter an arbitrary OpenID URL. From a Google search I knew that in the past you could do something like https://www.facebook.com/openid/consumer_helper.php?openid.mode=checkid_setup&user_claimed_id=YOUR_CLAIMED_ID_HERE&context=link&request_id=0&no_extensions=false&third_party_login=false, but now the consumer_helper.php endpoint is gone. So for more than a year I thought Facebook was not vulnerable at all, until one day I was testing Facebook's Forgot your password? functionality and saw a request to https://www.facebook.com/openid/receiver.php.

That's when I began to suspect that Facebook was indeed vulnerable to that same XXE I had found out more than a year ago. I had to work a lot to confirm this suspicion, though. Long story short, when you forget your password, one of the ways you can prove to Facebook that you own an @gmail.com account is to log into your Gmail and authorize Facebook to get your basic information (such as email and name). The way this works is you're actually logging into Facebook using your Gmail account, and this login happens over OpenID. So far, so good, but this is where I got stuck. I knew that, for my bug to work, the OpenID Relying Party (RP - Facebook) has to make a Yadis discovery request to an OpenID Provider (OP) under the attacker's control. Let's say http://www.ubercomp.com/. Then my malicious OP will send a response with the rogue XML that will then be parsed by the RP, and the XXE attack will work.

Since the initial OpenID request (a redirect from Facebook to Google) happens without my intervention, there was no place for me to actually enter a URL under my control that was my OpenID identifier and have Facebook send a Yadis Discover request to that URL. So I thought the bug would not be triggered at all, unless I could somehow get Google to send Facebook a malicious XML, which was very unlikely. Fortunately, I was wrong. After a more careful reading of the OpenID 2.0 Specification, I found this nice gem in section 11.2 - Verifying Discovered Information:

"If the Claimed Identifier was not previously discovered by the Relying Party (the "openid.identity" in the request was "http://specs.openid.net/auth/2.0/identifier_select" or a different Identifier, or if the OP is sending an unsolicited positive assertion), the Relying Party MUST perform discovery on the Claimed Identifier in the response to make sure that the OP is authorized to make assertions about the Claimed Identifier".

I checked and, indeed, the openid.identity in the request was http://specs.openid.net/auth/2.0/identifier_select. This is a very common practice, actually. So indeed after a few minutes I was able to make a request to https://www.facebook.com/openid/receiver.php that caused Facebook to perform a Yadis discovery on a URL under my control, and the response to that request would contain malicious XML. I knew I had a XXE because when I told Facebook's server to open /dev/random, the response would never come and eventually a request killer would kick in after a few minutes. But I still couldn't read any file contents. I tried everything in the XXE bag of tricks (including weird combinations involving parameter entities), but nothing. I then realized I had a subtle bug in my exploit, fixed that, and then...

$ bash exploit.sh
* About to connect() to www.facebook.com port 80 (#0)
* Trying 31.13.75.1... connected
* Connected to www.facebook.com (31.13.75.1) port 80 (#0)
> GET /openid/receiver.php?provider_id=1010459756371&context=account_recovery&protocol=http&request_id=1&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=...(redacted)... HTTP/1.1
> Host: www.facebook.com
> Accept: */*
> User-Agent: Chrome
>

That's right, the response contained Facebook's /etc/passwd. Now we were going somewhere. By then I knew I had found the keys to the kingdom. After all, having the ability to read (almost) any file and open arbitrary network connections through the point of view of the Facebook server, and which don't go through any kind of proxy, was surely something Facebook wanted to avoid at any cost. But I wanted more. I wanted to escalate this to a full Remote Code Execution.

A lot of bug bounty programs around the web have a rule that I think is very sensible: whenever you find a bug, don't linger on messing around. Report the bug right away and the security team will consider the worst case scenario and pay accordingly. However, I didn't have much experience with the security team at Facebook and didn't know if they would consider my bug as a Remote Code Execution or not. Since I didn't want to give the wrong impression, I decided I would report the bug right away, ask for permission to try to escalate it to an RCE and then work on it while it was being fixed. I figured that would be ok because most bugs take a long time to be processed, and so I had plenty of time to try to escalate to an RCE while still keeping the nice imaginary white hat I have on my head. So after writing the bug report I decided to go out and have lunch, and the plan was to continue working when I came back.

However, I was wrong again. Since this was a very critical bug, when I got back home from lunch, a quick fix was already in place. Less than two hours after the initial report was sent. Needless to say, I was very impressed and disappointed at the same time, but since I knew just how I would escalate that attack to a Remote Code Execution bug, I decided to tell the security team what I'd do to escalate my access and trust them to be honest when they tested to see if the attack I had in my mind worked or not. I'm glad I did that. After a few back and forth emails, the security team confirmed that my attack was sound and that I had indeed found a RCE affecting their servers.

So this is how the first high impact bug I ever found was the entry point for an attack that probably got one of the highest payouts of any web security bug bounty program. Nice, huh?

Timeline

All timestamps are in GMT. I omitted a few unimportant interactions about the acknowledgements page and such.

  • 2013-11-19 3:51 pm: Initial report
  • 2013-11-19 5:37 pm: Bug acknowledged by security team member Godot
  • 2013-11-19 5:46 pm: I replied by sending a PoC to read arbitrary files
  • 2013-11-19 7:31 pm: Security team member Emrakul informed me that a short term fix was already in place and would be live in approximately 30 minutes
  • 2013-11-19 8:27 pm: I replied confirming that the bug was patched.
  • 2013-11-21 8:03 pm: Payout set. The security team informed me it was their biggest bounty payout to date.
  • 2013-11-22 2:13 am: I sent an email asking whether the security team had already considered the bug as RCE or just as a file disclosure.
  • 2013-11-23 1:17 am: Security team replied that they had not considered that the attack could be escalated to RCE.
  • 2013-11-23 7:54 pm: I sent an email explaining exactly how the attack could be escalated to an RCE (with file paths, example requests and all).
  • 2013-11-24 9:23 pm: Facebook replied that my attack worked and they'd have to work around it.
  • 2013-12-03 4:45 am: Facebook informed me that the longer term fix was in place and that they'd soon have a meeting to discuss a new bounty amount
  • 2013-12-03 7:14 pm: I thanked them and said I'd cross my fingers
  • 2013-12-13 1:04 pm: I found a Bloomberg article quoting Ryan McGeehan, who managed Facebook's incident response unit, saying that "If there's a million dollar bug, we will pay it out" and asked if there was any news.
  • 2013-12-30 4:45 am: Facebook informed me that, since the bug was now considered to be RCE, the payout would be higher. I won't disclose the amount, but if you have any comments about how much you think this should be worth, please share them. Unfortunately, I didn't get even close to the one-million dollar payout cited above.

Facebook will lose 80% of users by 2017, say Princeton researchers | Technology | The Guardian


URL:http://www.theguardian.com/technology/2014/jan/22/facebook-princeton-researchers-infectious-disease


Bubonic plague bacteria. Scientists argue that, like bubonic plague, Facebook will eventually die out. Photograph: AFP/Getty Images

Facebook has spread like an infectious disease but we are slowly becoming immune to its attractions, and the platform will be largely abandoned by 2017, say researchers at Princeton University.

The forecast of Facebook's impending doom was made by comparing the growth curve of epidemics to those of online social networks. Scientists argue that, like bubonic plague, Facebook will eventually die out.

The social network, which celebrates its 10th birthday on 4 February, has survived longer than rivals such as Myspace and Bebo, but the Princeton forecast says it will lose 80% of its peak user base within the next three years.

John Cannarella and Joshua Spechler, from the US university's mechanical and aerospace engineering department, have based their prediction on the number of times Facebook is typed into Google as a search term. The charts produced by the Google Trends service show Facebook searches peaked in December 2012 and have since begun to trail off.

"Ideas, like diseases, have been shown to spread infectiously between people before eventually dying out, and have been successfully described with epidemiological models," the authors claim in a paper entitled Epidemiological modelling of online social network dynamics.

"Ideas are spread through communicative contact between different people who share ideas with each other. Idea manifesters ultimately lose interest with the idea and no longer manifest the idea, which can be thought of as the gain of 'immunity' to the idea."

Facebook reported nearly 1.2 billion monthly active users in October, and is due to update investors on its traffic numbers at the end of the month. While desktop traffic to its websites has indeed been falling, this is at least in part due to the fact that many people now only access the network via their mobile phones.

For their study, Cannarella and Spechler used what is known as the SIR (susceptible, infected, recovered) model of disease, which creates equations to map the spread and recovery of epidemics.
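As a rough illustration of the mechanics, here is a minimal sketch of plain SIR with made-up parameters (the paper actually fits a modified "infection recovery" variant to search-traffic data, so this is only the basic shape of the model):

# Minimal SIR sketch with Euler integration. Parameters are
# illustrative, not the paper's fitted values.
def sir_step(S, I, R, beta, gamma, N, dt):
    new_infections = beta * S * I / N  # susceptible users "catch" the idea
    recoveries = gamma * I             # infected users lose interest
    return (S - new_infections * dt,
            I + (new_infections - recoveries) * dt,
            R + recoveries * dt)

S, I, R = 999.0, 1.0, 0.0  # nearly everyone susceptible, one early adopter
N = S + I + R
for _ in range(5000):
    S, I, R = sir_step(S, I, R, beta=0.3, gamma=0.1, N=N, dt=0.1)
print(round(S), round(I), round(R))  # adoption has peaked and died away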

They tested various equations against the lifespan of Myspace, before applying them to Facebook. Myspace was founded in 2003 and reached its peak in 2007 with 300 million registered users, before falling out of use by 2011. Purchased by Rupert Murdoch's News Corp for $580m, Myspace signed a $900m deal with Google in 2006 to sell its advertising space and was at one point valued at $12bn. It was eventually sold by News Corp for just $35m.

The 870 million people using Facebook via their smartphones each month could explain the drop in Google searches – those looking to log on are no longer doing so by typing the word Facebook into Google.

But Facebook's chief financial officer David Ebersman admitted on an earnings call with analysts that during the previous three months: "We did see a decrease in daily users, specifically among younger teens."

Investors do not appear to be heading for the exit just yet. Facebook's share price reached record highs this month, valuing founder Mark Zuckerberg's company at $142bn.

Facebook billionaire

When Facebook shares hit their peak in New York this week, it meant Sheryl Sandberg's personal fortune ticked over $1bn (£600m), making her one of the youngest female billionaires in the world.

According to Bloomberg, the 44-year-old chief operating officer of the social network owns about 12.3m shares in the company, which closed at $58.51 (£35) on Tuesday in New York, although they fell back below $58 on Wednesday. Her stake is valued at about $750m.

Her fortune has risen rapidly since last August, when she sold $91m of shares and was estimated to be worth $400m.

Sandberg has collected more than $300m from selling shares since the company's 2012 initial public offering, and owns about 4.7m stock options that began vesting last May.

"She was brought in to figure out how to make money," David Kirkpatrick, author of The Facebook Effect, a history of the company, told Bloomberg. "It's proving to be one of the greatest stories in business history."

Sandberg's rise in wealth mirrors her broadening role on the global stage. The Harvard University graduate and one-time chief of staff for former Treasury secretary Lawrence Summers is a donor to President Barack Obama, sits on the board of Walt Disney Co, and wrote the book Lean In. She will be discussing gender issues with IMF boss Christine Lagarde at Davos on Saturday.

Chrome Bugs Lets Sites Listen to Your Private Conversations


URL:http://talater.com/chrome-is-listening/


While we’ve all grown accustomed to chatting with Siri, talking to our cars, and soon maybe even asking our glasses for directions, talking to our computers still feels weird. But now, Google is putting their full weight behind changing this. There’s no clearer evidence of this than visiting Google.com and seeing a speech recognition button right there inside Google’s most sacred real estate - the search box.

Yet all this effort may now be compromised by a new exploit which lets malicious sites turn Google Chrome into a listening device, one that can record anything said in your office or your home, as long as Chrome is still running.

Check out the video to see the exploit in action.

Google’s Response

I discovered this exploit while working on annyang, a popular JavaScript Speech Recognition library. My work gave me the insight to find multiple bugs in Chrome, and to come up with this exploit, which combines all of them together.

Wanting speech recognition to succeed, I of course decided to do the right thing…

I reported this exploit to Google’s security team in private on September 13. By September 19, their engineers had identified the bugs and suggested fixes. On September 24, a patch which fixes the exploit was ready, and three days later my find was nominated for Chromium’s Reward Panel (where prizes can go as high as $30,000).

Google’s engineers, who’ve proven themselves to be just as talented as I imagined, were able to identify the problem and fix it in less than 2 weeks from my initial report.

I was ecstatic. The system works.

But then time passed, and the fix didn’t make it to users’ desktops. A month and a half later, I asked the team why the fix wasn’t released. Their answer was that there was an ongoing discussion within the Standards group, to agree on the correct behaviour - “Nothing is decided yet.”

As of today, almost four months after learning about this issue, Google is still waiting for the Standards group to agree on the best course of action, and your browser is still vulnerable.

By the way, the web’s standards organization, the W3C, has already defined the correct behaviour which would’ve prevented this… This was done in their specification for the Web Speech API, back in October 2012.

How Does it Work?

A user visits a site that uses speech recognition to offer some cool new functionality. The site asks the user for permission to use his mic, the user accepts, and can now control the site with his voice. Chrome shows a clear indication in the browser that speech recognition is on, and once the user turns it off, or leaves that site, Chrome stops listening. So far, so good.

But what if that site is run by someone with malicious intentions?

Most sites using Speech Recognition, choose to use secure HTTPS connections. This doesn’t mean the site is safe, just that the owner bought a $5 security certificate. When you grant an HTTPS site permission to use your mic, Chrome will remember your choice, and allow the site to start listening in the future, without asking for permission again. This is perfectly fine, as long as Chrome gives you clear indication that you are being listened to, and that the site can’t start listening to you in background windows that are hidden to you.

When you click the button to start or stop the speech recognition on the site, what you won’t notice is that the site may have also opened another hidden popunder window. This window can wait until the main site is closed, and then start listening in without asking for permission. This can be done in a window that you never saw, never interacted with, and probably didn’t even know was there.

To make matters worse, even if you do notice that window (which can be disguised as a common banner), Chrome does not show any visual indication that Speech Recognition is turned on in such windows - only in regular Chrome tabs.

You can see the full source code for this exploit on GitHub.

Speech Recognition's Future

Speech recognition has huge potential for launching the web forward. Developers are creating amazing things, making sites better, easier to use, friendlier for people with disabilities, and just plain cool…

As the maintainer of a popular speech recognition library, it may seem that I shot myself in the foot by exposing this. But I have no doubt that by exposing this, we can ensure that these issues will be resolved soon, and we can all go back to feeling very silly talking to our computers… A year from now, it will feel as natural as any of the other wonders of this age.

Stripe CTF3: Distributed Systems


URL:https://stripe.com/blog/ctf3-launch


Greg Brockman, January 22, 2014

We’re proud to launch Capture the Flag 3: Distributed Systems. Without further ado, you can now jump in and start playing. If you complete all the levels, we'll send you a special-edition Stripe CTF3 T-shirt.

For those seeking further ado: we’ve found that the best way to teach people to build good systems is by giving them hands-on experience with problems that even expert developers may only occasionally get the chance to solve. We’ve run two previous Capture the Flags, both of which were designed to be an interesting way to get hands-on experience with crafting vulnerabilities.

Problems that follow this pattern—interesting, educational, rarely encountered—occur in many places outside security though, and we've made Capture the Flag 3 focus on distributed systems. There are five levels, each one focused on a different problem in the field. In all cases, the problem is one you’ve likely read about many times but never had a chance to try out in practice.

If you’d like to see how others are doing, we have leaderboards (for those who’ve opted in). You can also create a leaderboard for your group or company if you’d like to compete against your friends. We have a CTF community chat on IRC at irc://irc.stripe.com:+6697/#ctf (also available via our web client). If you'd rather use Twitter than IRC, #stripectf is the hashtag for the event.

Above all, we want you to have fun and hopefully learn something in the process. If you get lost, we’ve provided beginners’ guides for each level which should point you in the right direction.

CTF3 will run for a week (so until 11am Pacific on January 29th). Happy hacking!

4K, HDMI, and Deep Color « tooNormal


Comments:"4k, HDMI, and Deep Color « tooNormal"

URL:http://www.toonormal.com/2014/01/10/4k-hdmi-and-deep-color/


As of this writing (January 2014), there are 2 HDMI specifications that support 4K Video (3840×2160 16:9). HDMI 1.4 and HDMI 2.0. As far as I know, there are currently no HDMI 2.0 capable TVs available in the market (though many were announced at CES this week).

Comparing HDMI 1.4 (Black text) and 2.0 (Orange). Source: HDMI 2.0 FAQ

A detail that tends to be neglected in all this 4K buzz is the Chroma Subsampling. If you’ve ever compared an HDMI signal against something else (DVI, VGA), and the quality looked worse, one of the reasons is because of Chroma Subsampling (for the other reason, see xvYCC at the bottom of this post).

Chroma Subsampling is extremely common. Practically every video you’ve ever watched on a computer or other digital video player uses it. As does the JPEG file format. That’s why we GameDevs prefer formats like PNG that don’t subsample. We like our source data raw and pristine. We can ruin it later with subsampling or other forms of texture compression (DXT/S3TC).

In the land of subsampling, a descriptor like 4:4:4 or 4:2:2 is used. Images are broken up into 4×2 pixel cells, and the descriptor says how much color (chroma) data is lost. 4:4:4 is the lossless case: no chroma is actually discarded. Chroma subsampling uses the YCbCr color space (sometimes called YCC) as opposed to the standard RGB color space.

Great subsampling diagram from Wikipedia, showing what the different encodings mean

Occasionally the term “4:4:4 RGB” or just “RGB” is used to describe the standard RGB color space. Also note, Component Video cables, though they are colored red, green, and blue, are actually YPbPr encoded (the Analog version of YCbCr).
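To make the J:a:b arithmetic concrete, here is a little sketch (my own, not from any HDMI document) that counts the average bits per pixel of each scheme over a 4×2 block:

# Average bits/pixel of a J:a:b scheme over a J-wide, 2-row block:
# 2*J luma (Y) samples, plus (a+b) pairs of chroma (Cb,Cr) samples.
def bits_per_pixel(bits, a, b, J=4):
    return bits * (2*J + 2*(a + b)) / (2.0*J)

print(bits_per_pixel(8, 4, 4))    # 4:4:4 -> 24.0, same as 8bit RGB
print(bits_per_pixel(8, 2, 2))    # 4:2:2 -> 16.0
print(bits_per_pixel(8, 2, 0))    # 4:2:0 -> 12.0
print(bits_per_pixel(12, 2, 2))   # 12bit 4:2:2 -> 24.0

That last line is why 12bit 4:2:2 keeps showing up next to 8bit 4:4:4 in the HDMI tables: it costs exactly the same bandwidth.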

Looking at the first diagram again, we can make a little more sense of it.

In other words:

  • HDMI 1.4 supports 8bit RGB, 8bit 4:4:4 YCbCr, and 12bit 4:2:2 YCbCr, all at 24-30 FPS
  • HDMI 2.0 supports RGB and 4:4:4 in all color depths (8bit-16bit) at 24-30 FPS
  • HDMI 2.0 only supports 8bit RGB and 8bit 4:4:4 at 60 FPS
  • All other color depths require Chroma Subsampling at 60 FPS in HDMI 2.0
  • Peter Jackson’s 48 FPS (The Hobbit’s “High Frame Rate” HFR) is notably absent from the spec

Also worth noting, the most well supported color depths are 8bit and 12bit. The 12 bit over HDMI is referred to as Deep Color (as opposed to High Color).

The HDMI spec has supported only 4:4:4 and 4:2:2 since HDMI 1.0. As of HDMI 2.0, it also supports 4:2:0, which is available in HDMI 2.0's 60 FPS framerates. Blu-ray movies are encoded in 4:2:0, so I'd assume this is why they added it.

All this video signal butchering begs the question: which is the better trade-off? More color depth per pixel, or full color data for every pixel?

I have no idea.

If I were to guess though, because TVs aren't right in front of your face like a computer monitor, I'd expect 4K 4:2:2 might actually be better: greater luminance precision, with a bit of chroma fringing.

Some plasma and LCD screens use something called a PenTile matrix arrangement of their red, green, and blue subpixels.

The AMOLED screen of the Nexus One: a green subpixel for every pixel, but only a red or a blue for every other pixel, switching red/blue order every line. Not all AMOLED screens are PenTile; the Super AMOLED Plus screen found in the PS Vita uses a standard RGB layout.

So even if we wanted more color fidelity per individual pixel, it may not be physically there.

Deep Color

Me, my latest graphics fascination is Deep Color. Deep Color is the marketing name for more than 8 bits per color channel. It isn't necessarily something we need in asset creation (not me, but some do want full 16bit color channels). But as we start running filters/shaders on our assets, stuff like HDR (but more than that), we end up losing the quality of the original assets as they are re-sampled to fit into an 8bit RGB color space.

This can result in banding, especially in near flat color gradients.

From Wikipedia, though it’s possible the banding shown may be exaggerated

Photographers have RAW and HDR file formats for dealing with this stuff. We have Deep Color, in all its 30bit (10 bits per channel), 36bit (12 bits), and 48bit (16 bits) glory. Or really, just 36bit, but 48bit can be used as a RAW format if we wanted.

So the point of this nerding: An ideal display would be 4K, support 12bit RGB or 12bit YCbCr, at 60 FPS.

The thing is, HDMI 2.0 doesn’t support it!

Perhaps that’s fine though. Again, HDMI is a television spec. Most television viewers are watching video, and practically all video is 4:2:0 encoded anyway (which is supported by the HDMI 2.0 spec). The problem is gaming, where our framerates can reach 60FPS.

The HDMI 2.0 spec isn’t up to spec.

Again, this is probably fine. Nobody is really pushing the now-current generation of consoles as 4K machines anyway. Sony may have 4K video playback support, but most high end games are still targeting 1080p and even 720p. 4K is 4x the pixels of 1080p. I suppose it's an advantage that 4K only supports 30FPS right now, meaning you only need to push 2x the data per second (4x the pixels at half the framerate) to be a "4K game", but still.

HDMI Bandwidth is rated in Gigabits per second.

  • HDMI 1.0->1.2: ~4 Gb
  • HDMI 1.3->1.4: ~8 Gb
  • HDMI 2.0: ~14 Gb (NEW)

Not surprisingly, 4K 8bit 60FPS is ~12 Gb of data, and 30FPS is ~6 Gb of data. Our good friend 4K 12bit 60FPS though is ~18 Gb of data, well above the limits of HDMI 2.0.
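Those numbers are easy to sanity-check. A quick sketch of the arithmetic (mine; 1000-based giga as above, 3 channels, ignoring HDMI's encoding overhead):

# Raw bandwidth of an uncompressed RGB / 4:4:4 signal, in Gb/s:
def gbps(w, h, bits, fps):
    return w * h * bits * 3 * fps / 1e9

print(gbps(3840, 2160,  8, 60))   # 4K  8bit 60FPS   -> ~11.9
print(gbps(3840, 2160,  8, 30))   # 4K  8bit 30FPS   -> ~6.0
print(gbps(3840, 2160, 12, 60))   # 4K 12bit 60FPS   -> ~17.9
print(gbps(1920, 1080,  8, 60))   # 1080p 8bit 60FPS -> ~3.0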

To compare, Display Port.

  • DisplayPort 1.0 and 1.1: ~8 Gb
  • DisplayPort 1.2: ~17 Gb
  • DisplayPort 1.3: ~32 Gb (NEW)

They’re claiming 8K and 4K@120Hz (FPS) support with the latest standard, but 18 doesn’t divide that well into 32, so somebody has to have their numbers wrong (admittedly I divided mine by 1000, not 1024). Also, since 8K is 4x the resolution of 4K, and the bandwidth only roughly doubled, practically speaking DisplayPort 1.3 can only support 8K 8bit 30FPS. And that 4K@120Hz is 4K 8bit 120FPS. Still, if you don’t need 120FPS, that leaves room for 4K 16bit 60FPS, which is more than needed (12bit would do). I wonder if anybody will support 4K 12bit 90FPS over DisplayPort?

And that’s 4K.

1080p and 2K Deep Color

Today 1080p is the dominant high resolution: 1920×1080. To the film guys, true 2K is 2048×1080, but there are a wide variety of devices in the same range, such as 2560×1600 and 2560×1440 (4x 720p). These, including 1080p, are often grouped under the label 2K.

A second of 1080p 8bit 60FPS data requires ~3 Gb of bandwidth, well within the range supported by the original HDMI 1.0 spec (though why we even had to deal with 1080i is a good question, probably due to the inability to even meet the HDMI spec).

To compare, a second of 1080p 12bit 60FPS data requires ~4.5 Gb of bandwidth. Even 1080p 16bit 60FPS needs only ~6 Gb, well within the range supported by HDMI 1.3 (where Deep Color was introduced). Plenty of headroom still. Only when we push 2560×1440 12bit 60FPS (~8 Gb) do we hit the limits of HDMI 1.3.

So from a specs perspective, I just wanted to note this because Deep Color and 1080p are reasonable to support on now-current generation game consoles. Even the PlayStation 3, by specs, supported this. High end games probably didn’t have enough processing to spare for this, but it’s something to definitely consider supporting on PlayStation 4 and Xbox One. As for PC, many current GPUs support Deep Color in full-screen resolutions. Again, full-screen, not necessarily on your Desktop (i.e. windowed). From what I briefly read, Deep Color is only supported on the Desktop with specialty cards (FirePro, etc).

One more thing: YCbCr (YCC) and xvYCC

You may have noticed, watching a video file, that the blacks don’t look very black.

Due to a horrible legacy thing (CRT displays), data encoded as YCbCr uses values from 16 to 235 for luma (16 to 240 for chroma) instead of 0 to 255. That’s quite the loss, roughly 12% of the available range, effectively lowering the precision below 8bit. The only reason it’s still done is because of old CRT televisions, which can be really tough to find these days. Regrettably, that means both the original DVD and Blu-ray movie standards were forced to comply with this.
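To see where the precision goes, here is a tiny sketch (assuming the BT.601 luma scaling, which I believe is what these devices use):

# Squeeze an 8bit full-range value (0-255) into video range (16-235):
def full_to_video(v):
    return 16 + int(round(v * 219 / 255.0))

# 256 distinct inputs collapse onto only 220 output codes, so some
# neighbouring shades become identical -- hello, banding:
print(len(set(full_to_video(v) for v in range(256))))   # 220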

Sony proposed x.v.Color (xvYCC) as a way of finally forgetting this stupid limitation of old CRT displays, and using the full 0->255 range. As of HDMI 1.3 (June 2006), xvYCC and Deep Color are part of the HDMI spec.

Several months later (November 2006), the PlayStation 3 was launched. So as a rule of thumb, only HDMI devices newer than the PlayStation 3 could potentially support xvYCC. This means televisions, audio receivers, other set top boxes, etc. It’s worth noting that some audio receivers may actually clip video signals to the 16-240 range, thus ruining the picture quality of an xvYCC source. Also, the PS3 was eventually updated to HDMI 1.4 via a software update, but the only 1.4 feature supported is stereoscopic 3D.

Source: Wikipedia.

The point of bringing this up is to further emphasize the potential for color banding and terrible color reproduction over HDMI. An 8bit RGB framebuffer is potentially being compressed to fit within the YCbCr 16-240 range before it gets sent over HDMI. The PlayStation 3 has a setting for enabling the full color range (I forget the exact name), and other new devices probably do too (though probably not under the name xvYCC).

According to Wikipedia, all of the Deep Color modes supported by HDMI 1.3 are xvYCC, as they should be.



TCP backdoor 32764 or how we could patch the Internet (or part of it ;))


Comments:"TCP backdoor 32764 or how we could patch the Internet (or part of it ;))"

URL:http://blog.quarkslab.com/tcp-backdoor-32764-or-how-we-could-patch-the-internet-or-part-of-it.html


Eloi Vanderbéken recently found a backdoor in some common routers, which he describes on his GitHub here. Basically, a process listens on TCP port 32764, sometimes reachable from the WAN interface. We scanned the IPv4 Internet to look for routers with this backdoor wide open, and gathered some statistics about them. We will also present a way to permanently remove this backdoor on Linksys WAG200N routers.

Note that although this backdoor allows free access to many hosts on the Internet, no patch is available, as the firmware is not maintained anymore. So we thought about some tricks, combined with our tools, to imagine how to fix that worldwide.

This backdoor doesn't have any kind of authentication and allows various remote commands, like:

  • remote root shell
  • NVRAM configuration dump: Wifi and/or PPPoE credentials can be extracted for instance
  • file copy

Let's see how many routers are still exposed to this vulnerability, and propose a way to remove this backdoor.

Looking for the backdoor on the Internet

We first used masscan to look for hosts with TCP port 32764 open. We ended up with about 1 million IPv4 addresses. The scan took about 50 hours on a low-end Linux virtual server.

Then, we had to determine whether this was really the backdoor exposed, or some other false positive.

Eloi's POC shows a clear way to do this:

  • Send a packet to the device
  • Wait for an answer containing 0x53634D4D (or 0x4D4D6353, depending on the endianness of the device, see below)
  • In that case, the backdoor is present and accessible.

In order to check the IPs previously discovered, we couldn't use masscan or a similar tool (they don't have any "plugin" feature). Moreover, sequentially establishing a connection to each IP to verify that the backdoor is present would take ages: with a 1 second timeout, the worst case scenario is 1 million seconds (about 12 days), and even if half the hosts answered immediately, it would still run for 6 days. Quick process-based parallelism could help and might divide this time by 10 or 20. That still remains a lot, and is not the right way to do this.
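The back-of-the-envelope behind those numbers:

hosts = 1000000
print(hosts * 1.0 / 86400)    # 1s timeout each: ~11.6 days
print(hosts * 0.5 / 86400)    # half answering instantly: ~5.8 days
print(hosts / 20.0 / 86400)   # split over 20 processes: still > half a day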

We thus decided to quickly code a scanner based on asynchronous sockets to check the availability of the backdoor. The advantage of asynchronous sockets is that lots of them (about 30k in our tests) can be managed at the same time, i.e. 30k hosts in parallel. This kind of parallelism couldn't be achieved with classical process- (or thread-) based parallelism.

This asynchronous model is roughly the same one used by masscan and zmap, except that they bypass sockets to emit packets directly (and thus manage even more hosts simultaneously).

Using a classical Linux system, a first implementation using the select function and a five second timeout would perform at best ~1000 tests/second. The limitation is mainly due to the FD_SETSIZE value, which on most Linux systems is set to 1024 by default. This means the maximum file descriptor identifier that select can handle is 1024 (and thus the number of descriptors is at most this limit). In the end, this limits the number of sockets that can be open at the same time, and thus the overall scan performance.

Fortunately, other models without this limitation exist; epoll is one of them. After adapting our code, our system was able to test about 6k IPs/s (using 30k sockets simultaneously). That is less than what masscan and/or zmap can do (in terms of packets/s), but it gave good enough performance for what we needed to do.

In the end, we found about 6500 routers running this backdoor.

Sources for these scanners are not ready to be published yet (trust me), but stay tuned :)

Note about big vs. little endian

The people that coded this backdoor didn't care about the endianness of the underlying CPU. That's why the signature that is received can have two different values.

To exploit the backdoor, one has to first determine the endianness of the remote router. This is done by checking the received signature: 0x53634D4D means little-endian, 0x4D4D6353 big-endian.
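In pseudo-Python, the whole check boils down to something like this (a sketch only: it uses Python 3's asyncio rather than our epoll-based scanner, and the probe header layout is borrowed from Eloi's POC, so treat the packet format as an assumption):

import asyncio
import struct

SIG_LE = 0x53634D4D   # signature as sent by a little-endian device
SIG_BE = 0x4D4D6353   # the same bytes coming from a big-endian device

async def check_backdoor(ip, port=32764, timeout=5.0):
    """Return 'little', 'big', or None for the given host."""
    try:
        reader, writer = await asyncio.wait_for(
            asyncio.open_connection(ip, port), timeout)
        # Probe: an empty message header (signature, type, payload length),
        # assumed to follow the layout used in Eloi's poc.py.
        writer.write(struct.pack('<III', SIG_LE, 1, 0))
        await writer.drain()
        header = await asyncio.wait_for(reader.readexactly(4), timeout)
        writer.close()
        sig = struct.unpack('<I', header)[0]
        if sig == SIG_LE:
            return 'little'
        if sig == SIG_BE:
            return 'big'
    except (OSError, asyncio.TimeoutError, asyncio.IncompleteReadError):
        pass
    return None

Running tens of thousands of such coroutines concurrently gives the same effect as our 30k simultaneous epoll sockets.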

The nerve of a good patch: information

As we are looking to provide a patch for the firmware, we needed to know what was out there: which hardware, where it is, and so on.

Hardware devices

We tried to identify the different hardware devices. It is not obvious at first, and there are several ways to do it by hand:

  • Look for this information in the web interface;
  • Parse the various configuration scripts;
  • grep for "netgear", "linksys" and other manufacturers;
  • Look for files with specific names (like WAG160N.ico)

It is a bit hard to automate that process. Fortunately for us, a "version" field can be obtained through the backdoor. This field seems to be consistent across identical hardware. Unfortunately, the mapping between this version field and the real hardware still has to be done by hand. The process is not perfect, but so far we haven't seen two different hardware models with the same "version" field.

Moreover, Cisco WAP4410 access points don't support this query, but can be identified through the "sys_desc" variable (in their internal configuration).

The final breakdown of hardware models is the following:

As you can see, there is still work to do in order to identify all the different hardware models.

Country statistics

Another interesting statistic is the breakdown by country of the backdoored routers (based on the Maxmind public GeoIP database):

The unluckiest are the United States, followed by the Netherlands, with China close behind.

Reconstructing a new filesystem

In order to provide a new, clean filesystem, we needed to dump one from a router, so we first used the backdoor to retrieve it. Moreover, analyzing it makes it easier to understand how the router works and is configured.

We made our experiments on our Linksys WAG200N; these techniques might or might not work on others. On this particular router (and maybe others), a telnet daemon executable is available (/usr/sbin/utelnetd). It can be launched through the backdoor and directly drops a root shell.

The backdoor allows us to execute commands as the root user and to copy files. Unfortunately, on many routers there is no SSH daemon that can be started, or even a netcat utility for easy and quick transfers. Moreover, the only writable partition is generally a RAMFS mounted on /tmp.

So let's see what are our options here.

Option 1: download through the web server

Lots of routers have a web server that is used for their configuration. This web server (like every process running on the router) runs as the root user. This is potentially a good way to download files (and the rootfs) from the router.

For instance, on the Linksys WAG200N (and others), /www/ppp_log is a symbolic link to /tmp/ppp_log (used by the web server to show the PPP log). Thus, the root FS can be downloaded like this:

  • Get a root shell thanks to the backdoor
# cd /tmp
# mkdir ppp_log
# cd ppp_log && cat /dev/mtdblock/0 >./rootfs
  • When it's done, on your computer: wget http://IP/ppp_log/rootfs

The MTD partition to use can be identified in several ways; /proc/mtd is a first stop. In our case:

# cat /proc/mtd
dev: size erasesize name
mtd0: 002d0000 00010000 "mtd0"
mtd1: 000b0000 00010000 "mtd1"
mtd2: 00020000 00010000 "mtd2"
mtd3: 00010000 00010000 "mtd3"
mtd4: 00010000 00010000 "mtd4"
mtd5: 00040000 00010000 "mtd5"

The names do not give a lot of information, but from the sizes of the partitions we can guess that mtd0 is the rootfs (~2.8MB). Moreover, a symlink in /dev validates this assumption:

# ls -la /dev
[...] root -> mtdblock/0

Option 2: MIPS cross compilation, netcat and upload

Another way to dump the rootfs is to cross-compile a tool like netcat for the target architecture. Remember, we have both little- and big-endian MIPS systems, so we potentially need a cross-compilation toolchain for each.

By analyzing some binaries on our test router, it appears that the manufacturer used the uClibc library and toolchains, with versions from 2005. Thus, we downloaded one of the uClibc versions released that year. After some difficulties finding the old versions of binutils, gcc (and others) and getting a working GCC 3.x compiler, our MIPSLE cross compiler was ready.

Then, we grabbed the netcat sources and compiled them. There are multiple ways to upload our freshly compiled binary to the router. The first one is to use the write "feature" of the backdoor:

$ ./poc.py --ip IP --send_file ./nc --remote-filename nc

This has been implemented in Eloi's POC here: https://raw.github.com/elvanderb/TCP-32764/master/poc.py.

The issue with this feature is that it seems to crash with "big" files.

Another technique is to use echo -n -e to transfer our binary. It works but is a bit slow. Also, the connection is sometimes closed by the router, so we have to restart where it stopped. Just do:

$ ./poc.py --ip IP --send_file2 ./nc --remote-filename nc

The MIPSEL netcat binary can be downloaded here: https://github.com/quarkslab/linksys-wag200N/blob/master/binaries/nc?raw=true.

Once netcat has been uploaded, simply launch on the router:

# cat /dev/mtdblock/0 |nc -n -v -l -p 4444

And, on your computer:

$ nc -n -v IP 4444 >./rootfs.img

Patch me if you can (yes we can)

Please note that everything described here is experimental and should only be done if you know exactly what you are doing. We can't be held responsible for any damage you do to your beloved routers!

Note: the tools mentioned here are available on GitHub: https://github.com/quarkslab/linksys-wag200N.

Now that we have the original SquashFS image, we can extract it. It is a well-known format, so let's grab the latest version of the tools (4.0 at the time of writing this article) and "unsquash" it:

$ unsquashfs original.img
Parallel unsquashfs: Using 8 processors
gzip uncompress failed with error code -3
read_block: failed to read block @0x1c139c
read_fragment_table: failed to read fragment table block
FATAL ERROR:failed to read fragment table
$ unsquashfs -s original.img
Found a valid little endian SQUASHFS 2:0 superblock on ./original.img
[...]
Block size 32768

This is the same issue that Eloi pointed out in his slides: he had to force LZMA to be used for the extraction. With the fixes he provided (exercise also left to the reader), we can extract the SquashFS:

# unsquashfs-lzma original.img
290 inodes (449 blocks) to write
created 189 files
created 28 directories
created 69 symlinks
created 32 devices
created 0 fifos

What actually happened is that, back in 2005, the developer of this firmware modified the SquashFS 2.0 tools to use LZMA (and not gzip). Even if Eloi's "chainsaw" solution worked for extraction, it will not allow us to make a new image in the 2.0 format. So, back to the chainsaw: grab the squashfs 2.2 release from SourceForge and the LZMA 4.65 SDK, and make squashfs use it with this patch: https://github.com/quarkslab/linksys-wag200N/blob/master/src/squashfs-lzma.patch. The final sources can be downloaded here: https://github.com/quarkslab/linksys-wag200N/tree/master/src/squashfs2.2-r2-lzma.

With our new, freshly compiled, LZMA-enhanced SquashFS 2.2 back from the dead, we can now rebuild the Linksys rootfs image. It is important to respect the endianness and the block size of the original image (or your router won't boot anymore).

$ ./squashfs2.2-r2-lzma/squashfs-tools/mksquashfs rootfs/ rootfs.img -2.0 -b 32768
Creating little endian 2.0 filesystem on modified-bis.img, block size 32768.
Little endian filesystem, data block size 32768, compressed data, compressed metadata, compressed fragments
[...]
$ unsquashfs -s rootfs.img
Found a valid little endian SQUASHFS 2:0 superblock on rootfs.img.
Creation or last append time Wed Jan 22 10:38:29 2014
Filesystem size 1829.09 Kbytes (1.79 Mbytes)
Block size 32768
[...]

Now, let's get to the nice part. To test that our image works, we'll upload it and flash it to the router. This step is critical: if it fails, you'll end up with a router trying to boot from a corrupted root filesystem.

First, we use our previously compiled netcat binary to upload the newly created image (or use any other method of your choice):

On the router side:

# nc -n -v -l -p 4444 >/tmp/rootfs.img

On the computer side:

$ cat rootfs.img |nc -n -v IP 4444

When it's finished, say a little prayer and, on the router side:

# cat /tmp/rootfs.img >/dev/mtdblock/0

Then, power-cycle your router! If everything went well, it should reboot just like before. If not, you'll need to reflash it another way, using the serial console or any available JTAG port (not covered here).

Now, we can simply permanently remove the backdoor from the root filesystem:

# cd /path/to/original/fs
# rm usr/sbin/scfgmgr
# Edit usr/etc/rcS and remove the following line
/usr/sbin/scfgmgr

Then, rebuild your image as above, upload it, flash your router, and the backdoor should be gone forever! It's up to you to build an SSH daemon to keep root access on your router if you still want to play with it.

Linksys WAG200N patch procedure

For those who would just like to patch their routers, here are the steps. Please note that this has only been tested on our Linksys WAG200N! It is really not recommended to use it with other hardware. And, we repeat, we cannot be held responsible for any harm to your routers! Use this at your own risk.

  • First, upload the patched image:
$ ./poc.py --ip IP --send_file2 nobackdoor.img --remote-filename rootfs
  • Then get a root shell on your router:
$ ./poc.py --ip IP --shell
  • Check that the file sizes are the same:
# ls -l /tmp/rootfs
# it should be 1875968 bytes
  • To be sure, just redownload the uploaded image thanks to the web server and check that they are the same:
# mkdir /tmp/ppp_log
# ln -s /tmp/rootfs /tmp/ppp_log
And, on your computer:
$ wget http://IP/ppp_log/rootfs
$ diff ./rootfs /path/to/linksysWAG200N.nobackdoor.rootfs
  • Finally, flash the image:
# cat /tmp/rootfs >/dev/mtdblock/0

Conclusion

This article showed some statistics about the presence of the backdoor found by Eloi Vanderbéken, and how to fix one of the affected devices. Feel free to point out any mistakes, and/or provide similar images and/or fixes for other routers :)

Acknowledgements

  • Eloi Vanderbéken for his discovery and original POC
  • Fred Raynal, Fernand Lone-Sang, @pod2g, Serge Guelton and Kévin Szkudlaspki for their corrections


Nvidia marketing manager killed during train rescue attempt | Polygon


Comments:"Nvidia marketing manager killed during train rescue attempt | Polygon"

URL:http://www.polygon.com/2014/1/25/5344390/nvidia-marketing-manager-killed-during-train-rescue-attempt


Nvidia marketing manager Philip Scholz was killed on Jan. 20 after attempting to pull a man off the train tracks at the Santa Clara Caltrain Station in California, Mercury News reports.

According to surveillance footage taken just before his death, Scholz lay on his stomach and attempted to pull the man to safety just before the train, traveling at 50 to 70 mph, hit both men around 5:30 p.m. Scholz was killed in the accident, while the man he helped remains in critical condition at the hospital. The man's identity has not yet been released.

Scholz, age 35, was raised in Washington. He attended Santa Clara University in California and was married to Emily Scholz.

The memorial for Philip Scholz will take place at 10 a.m. on Feb. 10 at the Veterans Memorial Building in Pleasanton.

Making GIFs from Video Files with Python - __del__( self )


Comments:"Making GIFs from Video Files with Python - __del__( self )"

URL:http://zulko.github.io/blog/2014/01/23/making-animated-gifs-from-video-files-with-python/#


Sometimes producing a good animated GIF requires a few advanced tweaks, for which scripting can help. So I added a GIF export feature to MoviePy, a Python package originally written for video editing.

For this demo we will make a few GIFs out of this trailer:

Converting a video excerpt into a GIF

In what follows, we import MoviePy, open the video file, select the part between 1’22.65 (1 minute 22.65 seconds) and 1’23.2, reduce its size (to 30% of the original), and save it as a GIF:

from moviepy.editor import *

VideoFileClip("./frozen_trailer.mp4").\
    subclip((1,22.65),(1,23.2)).\
    resize(0.3).\
    to_gif("use_your_head.gif")

Cropping the image

For my next GIF I will only keep the center of the screen. If you intend to use MoviePy, note that you can preview a clip with clip.preview(). During the preview clicking on a pixel will print its position, which is convenient for cropping with precision.

kris_sven = (VideoFileClip("./frozen_trailer.mp4")
    .subclip((1,13.4),(1,13.9))
    .resize(0.5)
    .crop(x1=145, x2=400)   # remove left-right borders
    .to_gif("kris_sven.gif"))

Freezing a region

Many GIF makers like to freeze some parts of the GIF to reduce the file size and/or focus the attention on one part of the animation.

In the next GIF we freeze the left part of the clip. To do so we take a snapshot of the clip at t=0.2 seconds, we crop this snapshot to only keep the left half, then we make a composite clip which superimposes the cropped snapshot on the original clip:

anna_olaf = (VideoFileClip("./frozen_trailer.mp4")
    .subclip(87.9,88.1)
    .speedx(0.5)             # play at half speed
    .resize(.4))

snapshot = (anna_olaf
    .crop(x2=anna_olaf.w/2)  # remove the right half
    .to_ImageClip(0.2)       # snapshot of the clip at t=0.2s
    .set_duration(anna_olaf.duration))

CompositeVideoClip([anna_olaf, snapshot]).to_gif('anna_olaf.gif', fps=15)

Freezing a more complicated region

This time we will apply a custom mask to the snapshot to specify where it will be transparent (letting the animated part show through).

import moviepy.video.tools.drawing as dw

anna_kris = (VideoFileClip("./frozen_trailer.mp4", audio=False)
    .subclip((1,38.15),(1,38.5))
    .resize(.5))

# coordinates p1,p2 define the edges of the mask;
# grad_width blurs the mask's edges
mask = dw.color_split(anna_kris.size,
                      p1=(445, 20), p2=(345, 275),
                      grad_width=5)

snapshot = (anna_kris.to_ImageClip()
    .set_duration(anna_kris.duration)
    .set_mask(ImageClip(mask, ismask=True)))

(CompositeVideoClip([anna_kris, snapshot])
    .speedx(0.2)
    .to_gif('anna_kris.gif', fps=15, fuzz=3))   # fuzz = GIF compression

Time-symmetrization

Surely you have noticed that in the previous GIFs the end did not always look like the beginning, so you could see a jump every time the animation restarted. A way to avoid this is to time-symmetrize the clip, i.e. to make the clip play once forwards, then once backwards. This way the end of the clip really is its beginning, creating a GIF that loops fluidly, without a real beginning or end.

def time_symetrize(clip):
    """ Returns the clip played forwards then backwards. In case
    you are wondering, vfx (short for Video FX) is loaded by
    >>> from moviepy.editor import * """
    return concatenate([clip, clip.fx(vfx.time_mirror)])

VideoFileClip("./frozen_trailer.mp4", audio=False).\
    subclip(36.5,36.9).\
    resize(0.5).\
    crop(x1=189, x2=433).\
    fx(time_symetrize).\
    to_gif('sven.gif', fps=15, fuzz=2)

Ok, this might be a bad example of time symmetrization: it makes the snowflakes go upwards in the second half of the animation.

Adding some text

In the next GIF there will be a text clip superimposed on the video clip.

olaf = (VideoFileClip("./frozen_trailer.mp4", audio=False)
    .subclip((1,21.6),(1,22.1))
    .resize(.5)
    .speedx(0.5)
    .fx(time_symetrize))

# Many options are available for the text (requires ImageMagick)
text = (TextClip("In my nightmares\nI see rabbits.",
                 fontsize=30, color='white',
                 font='Amiri-Bold', interline=-25)
    .set_pos((20,190))
    .set_duration(olaf.duration))

CompositeVideoClip([olaf, text]).to_gif('olaf.gif', fps=10, fuzz=2)

Making the gif loopable

The following GIF features a lot of falling snow, so it cannot be made loopable using time-symmetrization (or you would see snow floating upwards!). Instead, we make the animation loopable by having the beginning appear progressively (fade in) just before the end of the clip. The montage here is a little complicated; I cannot explain it better than with this picture:

castle = (VideoFileClip("./frozen_trailer.mp4", audio=False)
    .subclip(22.8,23.2)
    .speedx(0.2)
    .resize(.4))

d = castle.duration
castle = castle.crossfadein(d/2)

(CompositeVideoClip([castle,
                     castle.set_start(d/2),
                     castle.set_start(d)])
    .subclip(d/2, 3*d/2)
    .to_gif('castle.gif', fps=5, fuzz=5))

Another example of a GIF made loopable

The next clip (from the movie Charade) was almost loopable: you can see Cary Grant smiling, then making a funny face, then coming back to normal. The problem is that at the end of the excerpt Cary is not exactly in the same position, and he is not smiling as he was at the beginning. To correct this, we take a snapshot of the first frame and make it appear progressively at the end. This seems to do the trick.

carry = (VideoFileClip("../videos/charade.mp4", audio=False)
    .subclip((1,51,18.3),(1,51,20.6))
    .crop(x1=102, y1=2, x2=297, y2=202))

d = carry.duration

snapshot = (carry.to_ImageClip()
    .set_duration(d/6)
    .crossfadein(d/6)
    .set_start(5*d/6))

CompositeVideoClip([carry, snapshot]).\
    to_gif('carry.gif', fps=carry.fps, fuzz=3)

Big finish: removing the background

Let us dive further into the scripting madness: we consider this video around 2’16:

And we will remove the background to make this gif (with transparent background):

The main difficulty was finding what the background of the scene is. To do so, the script gathers a few frames in which the little pigs are at different positions (so that every part of the background is visible in at least several, actually most, of the frames), then takes the pixel-per-pixel median of these pictures, which gives the background.

# Requires scikit-image installed
import numpy as np
import skimage.morphology as skm
import skimage.filter as skf   # old module name; 'skimage.filters' in newer versions
from moviepy.editor import *

### LOAD THE CLIP

pigsPolka = (VideoFileClip("pigs_in_a_polka.mp4")
    .subclip((2,16.85),(2,35))
    .resize(.5)
    .crop(x1=140, y1=41, x2=454, y2=314))

### COMPUTE THE BACKGROUND
# There is no single frame showing the background only (there
# is always a little pig on the screen) so we use the median of
# several carefully chosen frames to reconstitute the background.
# I must have spent half an hour finding the right set of frames.

times = (list(np.linspace(2.3,4.2,30)) +
         list(np.linspace(6.0,7.1,30)) +
         8*[6.2])
frames_bg = [pigsPolka.get_frame(t) for t in times]
background = np.percentile(np.array(frames_bg), 50, axis=0)

### MASK GENERATION

def get_mask_frame(t):
    """ Computes the mask for the frame at time t """

    # THRESHOLD THE PIXEL-TO-PIXEL DIFFERENCE
    # BETWEEN THE FRAME AND THE BACKGROUND
    im = pigsPolka.get_frame(t)
    mask = ((im - background)**2).sum(axis=2) > 1500

    # REMOVE SMALL OBJECTS
    mask = skm.remove_small_objects(mask)

    # REMOVE SMALL HOLES (BY DILATION/EROSION)
    selem = np.array([[1,1,1],[1,1,1],[1,1,1]])
    for i in range(2):
        mask = skm.binary_dilation(mask, selem)
    for i in range(2):
        mask = skm.binary_erosion(mask, selem)

    # BLUR THE MASK A LITTLE
    mask = skf.gaussian_filter(mask.astype(float), 1.5)

    return mask

mask = (VideoClip(ismask=True)
    .set_get_frame(get_mask_frame)
    .set_duration(pigsPolka.duration))

### LAST EFFECTS AND GIF GENERATION

(pigsPolka.set_mask(mask)
    .subclip(12.95,15.9)
    .fx(vfx.blackwhite)   # black & white effect!
    .to_gif('pigs_polka.gif', fps=10, dispose=True, fuzz=10))

SoundCloud Raises $60 Million At A $700 Million Valuation | TechCrunch


Comments:"SoundCloud Raises $60 Million At A $700 Million Valuation | TechCrunch"

URL:http://techcrunch.com/2014/01/25/soundcloud-raises-60-million-at-700-million-valuation/


SoundCloud recently closed a Series D round of funding led by Institutional Venture Partners with the Chernin Group. The Wall Street Journal first reported the news. It has since been confirmed by IVP and SoundCloud. Previous investors also participated in the round, including Kleiner Perkins Caufield & Byers, GGV Capital, Index Ventures and Union Square Ventures.

SoundCloud’s ultimate goal is to become the audio platform of the web, or the YouTube of audio. Just like YouTube, user-generated content remains the startup’s fuel. Every minute, 12 hours of sound and music are uploaded to the platform. For comparison’s sake, YouTube reports 100 hours of content uploaded every minute.

Many up-and-coming electronic music artists use SoundCloud to release mixtapes and share them around the web. Well-known musicians also release singles or live recordings on the platform to share them with their fans on Twitter or Facebook. In other words, SoundCloud is the perfect place to transform a music file into a URL and embeddable music player.


Back in October at Disrupt Europe, SoundCloud co-founder and CEO Alexander Ljung said that the company was focused on growth and engagement.

That’s why it simplified its premium offering. “The big thing when we made that change is that we went from four different account levels with a fairly wide range of pricing to two different levels with a smaller range,” Ljung said.

With a free account, you can upload up to 2 hours of music, while the most expensive plan allows you to upload an unlimited amount of music for $12 a month (€9). Subscriptions used to be much more expensive, and an unlimited plan was out of reach for many amateur artists.

Today’s funding news is probably the consequence of this focus on growth. At Disrupt, Ljung said that subscription numbers were “pretty much exactly on our forecast.” Seeing American VC firms putting a lot of faith in a European startup is a big win for the Berlin startup scene.

But SoundCloud still has to find the major hidden, yet reachable, treasure. Most of SoundCloud’s 250 million users turn to SoundCloud to consume music, listen to artist-curated playlists and comment. They aren’t content creators; they carefully curate a music feed by following artists on the platform. For now, they don’t generate a lot of money for the company — the website has never been inundated with ads.

According to Re/code, the company is now trying to sign content deals with major music labels. It would put SoundCloud in the same league as other big music companies.

It’s still unclear whether the company wants to create yet another subscription service like Spotify or Rdio, a music store like the iTunes Store or the Amazon MP3 Store, a radio-like experience like Pandora or iTunes Radio, or something completely different. It’s a crowded market, but signing these deals is an important step for the company.

With music labels on board, the company could get more users, more monetization options and better content to convince advertisers.

