Digital information and copyright law often do not play well together, so much so that it looks like Congress may be ready to start a “comprehensive review” of the entire copyright system. Usually we hear about this conflict as it pertains to sharing of copyrighted digital files – library lending of ebooks, for example – but in the past few weeks we have seen news that negotiations at the World Intellectual Property Organization (WIPO) have highlighted a different conflict, this one between the needs of the visually impaired and the revenue protections built into some current copyright laws. Advocates for the blind want an international exception to copyright that will permit conversion of content to accessible formats, but content producers want to require that any such exception pass the “three-step” test that some countries apply to limit interference with the right of copyright holders to make money.
- Disney, Viacom and other MPAA members join book publishers to weaken a treaty for the blind (Huffington Post/James Love) “…the June 2011 text was offered to make it easier for countries to ratify the convention if the national practice was to permit uses of works for the blind under other more general provisions in the national copyright law, such as a law on fair use or fair practices, an educational exception or a disability rights law. As WIPO turned to an examination of broader copyright limitations and exceptions issues, the EU began to backtrack and demand additional language in the treaty for the blind text that would require any of the exceptions set out in the treaty be implemented subject to a three step test, raising questions about what if anything the treaty would permit.”
- On copyright and rights of persons with disabilities: WIPO treaty for the blind (Kluwer Copyright Blog/Tatiana Sinodinou) “Since the conversion to accessible formats is an act of exploitation which is connected to a new market, copyright holders have a strong interest to control it tightly. An example of this controversy in the USA is a case in which the Authors Guild objected to the Kindle 2’s robotic text-to-speech feature, which can read Kindle books aloud in a synthesized voice, claiming that Amazon offered a product that was competitive to audio books, since it would cut into their sales.”
- Blind advocates: Hollywood lobbying threatens deal for accessible books (Ars Technica/Timothy B. Lee) “The blind community just wants easier access to books. US rightsholders have other ideas. In a Wednesday phone interview, a spokesman for the AAP [Association of American Publishers] told us that any treaty that enhances access for blind people must be coupled with provisions that shore up the rights of copyright holders. His organization has also pushed for additional restrictions on when non-profit organizations would be allowed to produce accessible versions of books. These groups have the ear of the Obama administration, and as a result the demands of rightsholders have dominated recent rounds of negotiations.”
- IFLA attends Informal Session and Special Session of the Standing Committee on Copyright and Related Rights, 18-20 April 2013 (International Federation of Library Associations and Institutions Committee on Copyright and other Legal Matters) “Unless substantial changes are made, reading disabled people around the world, particularly in developing and transition countries, would be granted a ‘trophy treaty’ that will not work on the ground, so they would continue to be denied or restricted in their enjoyment of what is a most important human right for everyone, the right to read which underpins every individual’s ability to succeed in the world. This could be worse for reading disabled people than no treaty at all as it would not be reopened for decades to put it right.”
Better news fact:
The news recently has not been all bad for the blind. On May 1, Amazon announced some new features for the Kindle that will improve accessibility for the visually impaired.
Over a year ago, the Internet Corporation for Assigned Names and Numbers (ICANN), the organization responsible for coordinating Internet domain names and IP addresses, announced that it would accept applications for new generic top-level domains (gTLDs). [See 4cast #235.] Current gTLDs include .com and .org, but now companies and organizations have submitted applications for a wide range of proposed new gTLDs – such as .book. Google has gone one step further and proposed a “dotless” domain that would consist of only one word: “search” (http://search/). That proposal has stirred up debate both about the wisdom of allowing dotless domains on the Internet and about Google’s motives.
- Google wants to operate .search as a “dotless” domain, plans to open .cloud, .blog and .app to others (TechCrunch/Frederic Lardinois) “Google plans to run http://search/ as a redirect service that ‘allows for registration by any search website providing a simple query interface.’ ‘The mission of the proposed gTLD, .search, is to provide a domain name space that makes it easier for Internet users to locate and make use of the search functionality of their choice,’ Google writes in its amended application.”
- On dotless domains and domainless TLD’s (The Rolled-Up Newspaper/Andrew Johnson) “So why would Google want to promote a way to search elsewhere when there’s no real threat to their position as top dog in search? […] ICANN is probably much more amenable to allowing a dotless TLD–a risky and huge departure from standard practice–knowing its operator is tied to a promise to include others. In this case, Google would just be investing in familiarizing people with the concept of a domain-less TLD, dotted or not, and they plan to do this to additional TLD’s down the road: first proprietary TLD’s (‘google,’ ‘android’) and maybe later generic TLD’s in a proprietary manner, if they could swing it (‘maps’ being exclusive to Google Maps, or ‘translate’ from Google Translate, for instance).”
- SSAC report on dotless domains [pdf] (ICANN Security and Stability Advisory Committee) “Other security issues may arise if dotless domains are permitted to host content directly. The advent of such hosting will violate a longstanding (more than 20 year) assumption that a dotless hostname is within an organization’s trust sphere. In Windows, for instance, this means that a dotless host may be considered to be in the Intranet zone, and is accorded the security privileges conveyed to sites in that zone. These privileges are significant and may, depending on the user’s configuration, permit code execution.”
- ICANN, the GAC, SSAC and gTLDs: Challenges with dotless domains and closed generics (MSDN Blogs/M3 Sweatt) “As we summarized in our comments [pdf], Microsoft supports and endorses the report’s recommendations against use of dotless domains. There are significant security considerations around the use of dotless domains with new gTLDs, generally a bad idea that would create significant security risks for people using the Internet. Dotless domain names are often resolved by operating systems, browsers and other products to addresses on the local network / intranet. Our recommendation is to use Fully Qualified Domain Names (FQDNs) – sometimes referred to as an absolute domain name – to ensure that people get where they are expecting when they type in an address on the Internet URL.”
At the very least, handling dotless domains would require extensive revisions to current web browsers and Internet apps. Such software typically interprets and completes shortened domain names and does not insist on use of fully qualified domain names.
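As a rough illustration (actual behavior depends entirely on the local resolver configuration, and the names below are placeholders), this Python sketch shows why a dotless name such as “search” is ambiguous while a fully qualified domain name is not:

```python
# Why dotless names are ambiguous: how they resolve depends on the local
# resolver's search suffixes, while a fully qualified domain name (written
# here with a trailing dot) means the same thing everywhere. The names used
# are placeholders for illustration.
import socket

def try_resolve(name: str) -> None:
    try:
        addresses = {info[4][0] for info in socket.getaddrinfo(name, 80)}
        print(f"{name!r:22} -> {sorted(addresses)}")
    except socket.gaierror as error:
        print(f"{name!r:22} -> no answer ({error})")

# Dotless: many stub resolvers append a configured search domain (for example
# corp.example.com), so this may resolve to an intranet host or to nothing.
try_resolve("search")

# A trailing dot marks the name as fully qualified; no suffixes are appended,
# so this asks the public DNS whether a bare top-level "search." exists.
try_resolve("search.")

# An ordinary fully qualified name resolves the same way everywhere.
try_resolve("www.example.com.")
```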
At this time of year, as people get more serious about planning summer vacations, travel guidebooks become a popular item at the library, though perhaps not as popular as they once were. The print guidebook industry has never really recovered from the 2008 recession, which led many people to delay leisure travel, and it has been partly supplanted by various online travel resources. Last August, Google expanded its holdings in the travel business when it bought the Frommer’s travel guides for $22 million, but what Google did with Frommer’s a few weeks ago is an interesting illustration of the kinds of deals companies will make just to get more social data.
- Google quietly pulls plug on Frommer’s print travel guidebooks (Skift/Jason Clampet) “Starting with Frommer’s New York City With Kids, which can still be found on Amazon, Barnes & Noble, and in other bookstore inventories and was supposed to publish on February 19, the entire future list of Frommer’s titles will not see the light of day. Many of the authors attached to these 29 titles told Skift that they were informed by editors now working at Google that the books would not publish.”
- Google mines Frommer’s Travel for social data, then sells the name back (Ars Technica/Megan Geuss) “Google bought ITA, a popular travel data service, in 2010, and the restaurant rating guide Zagat in 2011. But it was unclear how exactly Frommer’s would live on in Google’s pantheon. Last week, Google paradoxically sold the Frommer’s title back to the 83-year-old eponymous founder, who said he intended to resume publishing travel information under his name.”
- Google sold Frommer’s Travel — but kept all the social media data (PaidContent/Jeff John Roberts) “The social media data will power Google’s ongoing forays into the travel market in which it offers services like flight and hotel search, and Zagat reviews. In retrospect, it appears that the social media data may have been Google’s goal all along when it obtained Frommer’s from publisher John Wiley & Sons for $22 million in August 2012.”
- Google, Frommer’s, and trolling for social networking data (Lens 360/Bruce Guptill) “What did Google get for seven months of effort and $22M? Petabytes of travel-related social networking contacts and their related behavioral data. Google is retaining all of the data from former Frommer’s followers, from Frommer’s itself as well as from Facebook, Twitter, FourSquare, and of course, Google+. Now, Google has a wealth of social network user data to integrate with its well-organized, international travel advisory brand – Zagat – and its data management service/platform optimized for travel data use – ITA.”
While guidebook sales in the US dropped 10% to 20% after 2008, those sales seem to have stabilized recently.
People who work with Internet security have for some time advocated the use of “two-factor authentication” instead of a simple password control over access to sensitive or private information. Nobody likes to make things harder than they need to be, however, so adoption of two-factor authentication has been fairly limited. But that may have begun to change last week, when Microsoft announced that two-factor authentication will be available (though not necessarily required) for Microsoft accounts across its products and services.
- Microsoft rolling out two-factor authentication across its product line (ZDNet/Mary Jo Foley) “Two-factor authentication is aimed at reducing the likelihood of online identity theft, phishing and other scams because the victim’s password would no longer be enough to give a thief access to their information. Apple, PayPal, Google, Facebook and other vendors already have implemented two-factor authentication.”
- Microsoft Account gets more secure (Official Microsoft Blog) “This release enables optional two-step verification for your entire Microsoft account. Two-step verification is when we ask you for two pieces of information anytime you access your account — for example, your password plus a code sent to a phone or email on file as security info. More than a year ago, we began bringing two-step verification for certain critical activities, like editing credit cards and subscriptions at commerce.microsoft.com and xbox.com, or accessing files on another one of your computers through SkyDrive.com. For these scenarios, two-step verification is required 100 percent of the time for everyone, given the sensitive nature of these tasks.”
- Apple ID: Frequently asked questions about two-step verification for Apple ID (Apple Support) “Two-step verification simplifies and strengthens the security of your account. After you turn it on, there will be no way for anyone to access and manage your account at My Apple ID other than by using your password, verification codes sent to your trusted devices, or your Recovery Key.”
- AP Twitter hack sends stock market spinning (New York Magazine/Kevin Roose) “In my opinion, there is really only one lesson from this afternoon’s flash-crash: namely, Twitter needs multi-step authentication for verified and/or news-breaking accounts now. Twitter has gotten calls for stronger security measures for years, and it’s always been pretty reluctant to promise anything. (Last year, the company would say only, “We’ve certainly explored two-factor authentication among other security measures, and we continue to introduce features, such as https, to help users keep their accounts secure.”) But after today’s data point, it can’t wait any longer.”
Good two-factor authentication combines a Knowledge Factor (something the user knows) with a Possession Factor (something the user has).
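To make those two factors concrete, here is a minimal sketch of how a time-based one-time password (TOTP, RFC 6238), the possession factor behind most authenticator apps, can be checked alongside a password. The shared secret and stored password below are made-up placeholders; a real system would store a salted password hash and keep the secret out of source code.

```python
# Minimal two-factor check: a knowledge factor (password) plus a possession
# factor (a time-based one-time password per RFC 6238, the scheme used by
# most authenticator apps). All secrets here are illustrative only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)   # 30-second time window
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation (RFC 4226)
    number = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

def verify_login(password: str, code: str, expected_password: str, secret_b32: str) -> bool:
    """Both factors must pass; a real system would compare salted password hashes."""
    knows = hmac.compare_digest(password, expected_password)
    has = hmac.compare_digest(code, totp(secret_b32))
    return knows and has

SECRET = "JBSWY3DPEHPK3PXP"          # same secret enrolled on the user's phone
print("code right now:", totp(SECRET))
print(verify_login("hunter2", totp(SECRET), "hunter2", SECRET))   # True: both factors check out
```

A verifier would also typically accept the code from the adjacent time windows to allow for clock drift between the phone and the server.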
We’ve been talking about a “Futures Committee” in the Consortium, and at the PALS Office we’ve begun to look at potential projects for “Discovery Phase 2”. During the Midwest Library Technology Conference at Macalester, I spoke with a former colleague at the University of Wisconsin – River Falls. UW is beginning to develop an RFI for a discovery tool and is drafting the RFI and its questions in a public wiki. I thought it would be useful to see what they are thinking, so if you are interested you can take a look.
The questions for the RFI are on the CUWL wiki at http://cuwlwiki.wetpaint.com/. The RFI will be posted on Wisconsin’s Vendornet at http://vendornet.state.wi.us/vendomet/default.asp. When you get to the wiki, look at the links under “Resource Discovery Request for Information (RFI)”.
It looks to me like they are looking for a Summon-like product that harvests data from a number of sources (digital repositories, archival finding aids, library catalogs, and article-level metadata) in a variety of formats (MARC, Dublin Core, EAD, etc.). The goal is to provide a single interface and access point to a wide variety of materials beyond the library OPAC “silo”.
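To make that “one access point over many formats” idea concrete, here is a toy crosswalk sketch. The field mappings and sample records are invented for illustration, not taken from the RFI; a real implementation would use full MARC parsing and handle EAD as well.

```python
# A toy metadata crosswalk: records arriving in different schemas (a MARC-style
# field dictionary and a Dublin Core dictionary here) are mapped to one common
# shape that a single discovery index can search. All mappings and sample
# records are illustrative only.
def from_marc(fields: dict) -> dict:
    return {
        "title": fields.get("245", {}).get("a", ""),     # MARC 245 $a: title proper
        "creator": fields.get("100", {}).get("a", ""),   # MARC 100 $a: personal name main entry
        "source_format": "MARC",
    }

def from_dublin_core(dc: dict) -> dict:
    return {
        "title": dc.get("dc:title", ""),
        "creator": dc.get("dc:creator", ""),
        "source_format": "Dublin Core",
    }

# One index, two very different sources.
index = [
    from_marc({"245": {"a": "Minnesota Lakes"}, "100": {"a": "Doe, Jane"}}),
    from_dublin_core({"dc:title": "Prairie Restoration", "dc:creator": "Roe, R."}),
]
print([record["title"] for record in index])
```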
A “consolidated” or “integrated” search tool of this kind, built on a pre-harvested meta-index, avoids some of the problems of plain “federated” searching, such as slow response times and poor relevance ranking.
Federated searching seemed like a good idea, but in practice results came back too slowly, and each database would return only 50 or so records at a time. As a result, relevance ranking was applied not to all of the records in all of the result sets being searched, but only to the first 50 records from each database. Those 50 were typically the most recent, so results included lots of newspaper articles.
The next idea beyond federated searching is “integrated” or “consolidated” search, in which the user searches a meta-index instead of broadcasting a federated query. The advantages are faster searching and relevance ranking across all of the articles in the result set (a toy sketch below illustrates the ranking difference). EBSCO Integrated Search and Summon from Serials Solutions are two examples; the inclusion of articles in WorldCat Local is another. The disadvantage is cost. My understanding is that these products are very expensive, and while I will be talking to integrated search vendors about consortium pricing, it seems unlikely that PALS libraries will be able to afford a commercial integrated search product in these difficult times. Integrated search is also another layer: it does not replace your database subscriptions, and it does not provide the full text, so one still has to go back to the databases for that. A link resolver remains very useful as the means of getting from a citation returned by the integrated search product to the full text in licensed resources.
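Here is that toy sketch. It only illustrates the ranking problem, not how any particular vendor implements search; the per-source cap of 50 follows the behavior described above, and the data structures are made up.

```python
# Why per-source caps hurt relevance ranking: federated search can only rank
# the slice each database returns (often its newest ~50 records), while a
# consolidated search ranks every record in a pre-harvested meta-index.
# All data structures here are made up for illustration.
from typing import Dict, List, NamedTuple

class Record(NamedTuple):
    source: str
    title: str
    year: int
    relevance: float   # pretend score from whatever ranking formula is in use

def federated(sources: Dict[str, List[Record]], per_source_cap: int = 50) -> List[Record]:
    merged: List[Record] = []
    for records in sources.values():
        newest_first = sorted(records, key=lambda r: r.year, reverse=True)
        # The cap is applied *before* ranking, so an older but highly relevant
        # record from a large database may never reach the merged result list.
        merged.extend(newest_first[:per_source_cap])
    return sorted(merged, key=lambda r: r.relevance, reverse=True)

def consolidated(index: List[Record]) -> List[Record]:
    # The whole pre-harvested meta-index is ranked in one pass.
    return sorted(index, key=lambda r: r.relevance, reverse=True)
```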
A more cost-effective approach might be to use MnPALS Plus, add non-Aleph and non-MARC data to it, harvest article metadata from publicly available databases such as PubMed and from open access journals, and then supplement that with an open source federated search engine. (eXtensible Catalog has toolkits for some of these processes.) We will be looking at this and other scenarios for the next phase of discovery.
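As a rough sketch of what that harvesting step could look like, the snippet below pulls Dublin Core records over OAI-PMH, the protocol most repositories and many open access journals expose, using only the Python standard library. The endpoint and set name are placeholders, not a worked-out PALS configuration, and a real harvester would also handle resumption tokens, deleted records, and incremental re-harvesting.

```python
# A bare-bones OAI-PMH harvest of Dublin Core records using only the standard
# library. The base URL and set are placeholders; production harvesting would
# also follow resumptionToken paging and track datestamps for incremental runs.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest(base_url: str, set_spec: str = ""):
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    if set_spec:
        params["set"] = set_spec
    url = base_url + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    for record in tree.iter(OAI + "record"):
        title = record.findtext(".//" + DC + "title", default="(no title)")
        identifier = record.findtext(".//" + DC + "identifier", default="")
        yield title, identifier

# Hypothetical repository endpoint; PubMed Central, for instance, offers OAI-PMH.
for title, identifier in harvest("https://repository.example.org/oai", set_spec="journal-articles"):
    print(identifier, "-", title)
```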
P.S. We are still looking for partners for adding non-Aleph, non-MARC metadata to MnPALS Plus!