
Intersect Alert May 1, 2016


Freedom of Information:

22 Years Later, US Still Classifying “Bombshell” Plan to Pull Peacekeepers Out Before Rwanda Genocide
The tinderbox of Rwanda’s ethnic tensions ignited in April 1994, and mass violence engulfed the country in one of the swiftest campaigns of genocide in history. The National Security Archive’s Genocide Documentation Project holds thousands of declassified documents on Rwanda, including an April 15, 1994, State Department cable on the U.S. decision to pull United Nations forces out of Rwanda, a fact still withheld by State Department redactors even though the information has been released by the Czech Republic, New Zealand, the United Kingdom, and the United Nations and published on the Archive’s website.
On April 20, 1994, the Advisor on African Affairs to French President Mitterrand, Bruno Delaye, stated, “There is nothing to say.” According to UNHCR, 100,000 Rwandans would be dead by the end of April and 800,000 would be displaced. The following day, the International Committee of the Red Cross (ICRC) reported that the fighting that started in central Rwanda at the beginning of the month had spread to the rest of the country. Tens of thousands were dead and hundreds of thousands had fled their homes.
However, a plan by the U.S. and the UN to reduce and eventually withdraw the United Nations Assistance Mission for Rwanda (UNAMIR) was already well underway. On April 15, 1994, the U.S. Mission to the UN dropped a “bombshell” on the Security Council, arguing for the complete termination of UNAMIR and the pullout of all peacekeepers in Rwanda.
Reviewers nevertheless redacted the historic “bombshell” from a State Department cable, even though the fact that the U.S. called for the withdrawal of UNAMIR troops had previously been released to the National Security Archive by the governments of the Czech Republic, New Zealand, and the United Kingdom, and by the UN’s Kofi Annan, in response to FOIA requests. The information had even been published on the Archive’s website and in the critical oral history conference briefing book, “International Decision-Making in the Age of Genocide: Rwanda 1990-1994,” in June 2014.
https://nsarchive.wordpress.com/2016/04/26/22-years-later-us-still-classifying-bombshell-plan-to-pull-peacekeepers-out-before-rwanda-genocide/.

———————————-

Copyright:

The Misguided Plan to Expand A Performers’ Veto: More “Copyright Creep” Through Policy Laundering
A proposal to rewrite parts of copyright law, pushed by the U.S. Patent and Trademark Office, would create new restrictions for filmmakers, journalists, and others using recordings of audiovisual performances. With the Next Great Copyright Act lurching forward and the Copyright Office convening a new series of roundtables on the Digital Millennium Copyright Act, few have noticed the USPTO push happening now. But these proposals are a classic instance of copyright creep and are dangerous for users, creators, and service providers alike.
There are many problems with this plan. Here are a few:
Definitions: The definition of “performance” is unclear. Does it include lectures? Political speeches? An a cappella group singing a song that’s in the public domain? A flash mob? This matters a lot, especially for the professional and amateur creators and journalists who will need to obtain a license to capture and share any of these activities, and the even larger group of users who might want to repurpose that material.
Term: 95 years? Really? Admittedly, that’s better than no term at all, but even better would be, say, 14 years—or even the 50-year term that seems to be contemplated in the Treaty.
Damages: The current anti-bootlegging statute says that violators are subject to the same penalties as copyright infringers. Depending on how that language is interpreted, anyone who records and shares a “performance” and doesn’t get consent from the performer could be on the hook for up to $150,000 (or more depending on how damages are calculated, another messy question) and potentially attorneys’ fees as well.
Safe Harbors: Currently, it is unclear at best whether the DMCA safe harbors apply to bootlegging claims. That means service providers will worry that any content they host or transmit could subject them to secondary liability if, as will often be the case, the user did not (1) guess correctly about what kinds of consent might be necessary; and (2) obtain that consent.
Potential for abuse: Even if a court concluded that Section 512 applies to these new rights, we have a decade of experience showing that the Section 512 takedown process will be abused to take down lawful content.
What about other limitations? It’s great that our bootlegging provisions will now be explicitly subject to fair use and the library exceptions. But what about the many other limits on the reach of copyrights? Why not import them all?
Deception: Trade deal supporters often insist that trade agreements involving IP won’t require changes to US law, or only minimal changes. This proposal should serve as a useful demonstration, if such a demonstration were needed, that we can’t trust such claims.
https://www.eff.org/deeplinks/2016/04/another-fine-mess-ustr-has-gotten-us-misguided-plan-expand-performers-rights.

Who’s downloading pirated papers? Everyone
In increasing numbers, researchers around the world are turning to Sci-Hub, which hosts 50 million papers and counting. Over the 6 months leading up to March, Sci-Hub served up 28 million documents. More than 2.6 million download requests came from Iran, 3.4 million from India, and 4.4 million from China. The papers cover every scientific topic, from obscure physics experiments published decades ago to the latest breakthroughs in biotechnology. The publisher with the most requested Sci-Hub articles? Elsevier, by a long shot—Sci-Hub provided half a million downloads of Elsevier papers in one recent week.
These statistics are based on extensive server log data supplied by Alexandra Elbakyan, the neuroscientist who created Sci-Hub in 2011 as a 22-year-old graduate student in Kazakhstan. I asked her for the data because, in spite of the flurry of polarized opinion pieces, blog posts, and tweets about Sci-Hub and what effect it has on research and academic publishing, some of the most basic questions remain unanswered: Who are Sci-Hub’s users, where are they, and what are they reading?
The Sci-Hub data provide the first detailed view of what is becoming the world’s de facto open-access research library. Among the revelations that may surprise fans and foes alike: Sci-Hub users are not limited to the developing world. Some critics of Sci-Hub have complained that many users can access the same papers through their libraries but turn to Sci-Hub instead—for convenience rather than necessity. The data provide some support for that claim. The United States is the fifth-largest downloader after Russia, and a quarter of the Sci-Hub requests for papers came from the 34 members of the Organization for Economic Cooperation and Development, the wealthiest nations with, supposedly, the best journal access. In fact, some of the most intense use of Sci-Hub appears to be happening on the campuses of U.S. and European universities.
http://www.sciencemag.org/news/2016/04/whos-downloading-pirated-papers-everyone.

———————————-

Privacy Issues:

Revealed: Google AI has access to huge haul of NHS patient data
It’s no secret that Google has broad ambitions in healthcare. But a document obtained by New Scientist reveals that the tech giant’s collaboration with the UK’s National Health Service goes far beyond what has been publicly announced. The document – a data-sharing agreement between Google-owned artificial intelligence company DeepMind and the Royal Free NHS Trust – gives the clearest picture yet of what the company is doing and what sensitive data it now has access to.
The agreement gives DeepMind access to a wide range of healthcare data on the 1.6 million patients who pass through three London hospitals run by the Royal Free NHS Trust – Barnet, Chase Farm and the Royal Free – each year. This will include information about people who are HIV-positive, for instance, as well as details of drug overdoses and abortions. The agreement also includes access to patient data from the last five years.
The document also reveals that DeepMind is developing a platform called Patient Rescue, which will provide data analytics services to NHS hospital trusts. It states that Patient Rescue will use data streams from hospitals to build other tools, in addition to Streams, that could carry out real-time analysis of clinical data and support diagnostic decisions. One aim, the agreement says, is for these tools to help medical staff adhere to the UK’s National Institute for Health and Care Excellence guidelines. DeepMind is not planning to automate clinical decisions – such as what treatments to give patients – but says it wants to support doctors by making predictions based on data that is too broad in scope for an individual to take in.
https://www.newscientist.com/article/2086454-revealed-google-ai-has-access-to-huge-haul-of-nhs-patient-data/.

———————————-

Publishing:

Comparing Published Scientific Journal Articles to Their Pre-print Versions
Academic publishers claim that they add value to scholarly communications by coordinating reviews and by contributing to and enhancing the text during publication. These contributions come at a considerable cost: U.S. academic libraries paid $1.7 billion for serial subscriptions in 2008 alone. Library budgets, in contrast, are flat and unable to keep pace with serial price inflation. We have investigated the publishers’ value proposition by conducting a comparative study of pre-print papers and their final published counterparts. This comparison had two working assumptions: 1) if the publishers’ argument is valid, the text of a pre-print paper should vary measurably from its corresponding final published version, and 2) by applying standard similarity measures, we should be able to detect and quantify such differences. Our analysis revealed that the text contents of the scientific papers generally changed very little from their pre-print to final published versions. These findings contribute empirical indicators to discussions of the added value of commercial publishers and therefore should influence libraries’ economic decisions regarding access to scholarly publications.
http://arxiv.org/abs/1604.05363.
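The abstract refers to “standard similarity measures” without spelling out the exact pipeline in this excerpt. As a rough illustration only, the following sketch uses Python’s standard-library difflib to compute a word-level similarity ratio between two hypothetical plain-text versions of a paper; the file names are placeholders, not part of the study.

# Minimal sketch (not the study's actual pipeline): quantify how much a
# paper's text changed between its pre-print and published versions.
from difflib import SequenceMatcher

def load_words(path):
    """Read a plain-text version of a paper and split it into words."""
    with open(path, encoding="utf-8") as f:
        return f.read().split()

preprint = load_words("preprint.txt")      # hypothetical pre-print text
published = load_words("published.txt")    # hypothetical published text

# ratio() returns a value in [0, 1]; values near 1 mean the two versions
# are nearly identical, consistent with the finding that texts change
# very little from pre-print to published version.
similarity = SequenceMatcher(None, preprint, published).ratio()
print(f"similarity: {similarity:.3f}")

A real analysis would presumably also normalize the text first (for example, stripping publisher boilerplate and standardizing whitespace) and apply more than one measure, as the abstract implies.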

———————————-

Libraries:

Weeding the Worst Library Books
Last summer, in Berkeley, California, librarians pulled roughly forty thousand books off the shelves of the public library and carted them away. The library’s director, Jeff Scott, announced that his staff had “deaccessioned” texts that weren’t regularly checked out. But the protesters who gathered on the library’s front steps to decry what became known as “Librarygate” preferred a different term: “purged.” “Put a tourniquet on the hemorrhage,” one of the protesters’ signs declared. In response, Scott attempted to put his policy in perspective. His predecessor had removed fifty thousand books in a single year, he explained. And many of the deaccessioned books would be donated to a nonprofit—not pulped. Furthermore, after new acquisitions, the collection was actually expected to grow by eighteen thousand books, to a total of nearly half a million. But none of these facts stirred up much sympathy in Berkeley. A thousand people signed a petition demanding that Scott step down—and, in the end, he did.
Public libraries serve practical purposes, but they also symbolize our collective access to information, so it’s understandable that many Berkeley residents reacted strongly to seeing books discarded. What’s more, Scott’s critics ultimately contended that he had not been forthcoming about how many books were being removed, or about his process for deciding which books would go. Still, it’s standard practice, and often a necessity, to remove books from library collections. Librarians call it “weeding,” and the choice of words is important: a library that “hemorrhages” books loses its lifeblood; a librarian who “weeds” is helping the collection thrive. The key question, for librarians who prefer to avoid scandal, is which books are weeds.
Mary Kelly and Holly Hibner, two Michigan librarians, have answered that question in multiple ways. They’ve written a book called “Making a Collection Count: A Holistic Approach to Library Collection Management,” which proposes best practices for analyzing library data and adapting to space constraints. But they are better known for calling attention to the matter with a blog: Awful Library Books. Kelly and Hibner created the site in 2009. Each week, they highlight books that seem to them so self-evidently ridiculous that weeding is the only possible recourse. They often feature books with outlandish titles, like “Little Corpuscle,” a children’s book starring a dancing red blood cell; “Enlarging Is Thrilling,” a how-to about—you guessed it—film photography; and “God, the Rod, and Your Child’s Bod: The Art of Loving Correction for Christian Parents.”
http://www.newyorker.com/books/page-turner/weeding-the-worst-library-books.

LA Archives Have Their Own TV Show
In Los Angeles, anyone can be a star – even a library collection. The story of Lost LA, which draws on a Los Angeles library consortium’s local collections, proves that with the right tools (and a willingness to collaborate), libraries can reach an even wider audience. Lost LA wasn’t always a star. A few years ago, it was merely an attempt by Nathan Masters, manager of academic events and programming at the University of Southern California (USC) Libraries, to bring more attention to the university’s collections. Then KCET, a local public television station that had recently broken with PBS, approached USC with a unique question: would the university be willing to provide the station regular editorial content about Los Angeles history?
Making archives into TV isn’t a simple process. Masters estimates that the production team involved more than 100 people, in part because of a production model that relied on multiple small films. Each episode entailed a huge effort—one that Masters says has paid off. “Now more than ever, we’re getting research inquiries from scholars, journalists, and professionals in architecture and urban planning who heard about us through Lost LA or the web series that we had before that,” he says.
http://lj.libraryjournal.com/2016/04/academic-libraries/la-archives-have-their-own-tv-show/.

Please feel free to pass along in part or in its entirety; attribution appreciated.
The Intersect Alert is a newsletter of the Government Relations Committee, San Francisco Bay Region Chapter, Special Libraries Association.

