CFP: Digital Approaches to Cartographic Heritage

By way of maphist:

Third International Workshop
Digital Approaches to Cartographic Heritage
Barcelona, Catalunya, Spain 26 – 27 June 2008

Organized by the ICA Commission on Digital Technologies in Cartographic Heritage and the Institut Cartogràfic de Catalunya

Announcement - Call for papers

Venue: The Workshop will take place in Barcelona, the capital city of Catalunya, Spain, at the Institut Cartogràfic de Catalunya, Parc de Montjuïc (see map).

Participants and focus: This Workshop is addressed to scholars, researchers, map curators, map collectors, administrators, digital industry and market operators, and students from different cultural and educational backgrounds (humanistic, scientific and engineering) whose work is focused on or related to cartographic heritage. The Workshop will offer a common ground where colleagues from various disciplines and practices can meet, interact and exchange knowledge, experience, plans and ideas on how the digital revolution, and modern information and communication technologies generally, can contribute to cartographic heritage in terms of the acquisition, processing, visualization and communication of relevant digital data.

Sessions: The sessions will basically follow the ICA Commission’s terms of reference:
  • Introduce and establish the concept of "cartographic heritage". The multidisciplinary dimension of cartographic heritage.
  • Transformation into digital form of old maps, globes and other cartographic documents. Comparison of digitization methods and technologies and development of relevant standards.
  • Applications of digital techniques to the cartographic study (analysis and interpretation) of old maps and their geometric and thematic content. Tests of various analytic processes and visualization.
  • Development and management of digital map libraries accessible to the general public. Digital tools to assist map curators, to aid the networking of map libraries and to allow in-situ and remote virtual access to cartographic heritage.
  • Digital support for the preservation and restoration of old maps, atlases and globes.
  • The use of Information and Communication Technologies (ICT) and the web for teaching and for diffusing the heritage of cartography and maps to the general public.
  • The ‘digital needs’ of individual map collectors.
Papers: Papers, not exceeding 5,000 words, should be e-mailed by April 10 for inclusion in the Workshop's CD-ROM, which will be available to all participants.

Proceedings: The presented papers will be published in the proceedings of the Workshop. Shorter versions of some papers will also be published in e-Perimetron [ISSN 1790-3769], the international web journal on sciences and technologies related to the history of cartography and maps, following the journal's editorial policy.

Language: Presentations will be given in English, as will the papers submitted for inclusion in the Workshop's CD-ROM and in the Proceedings. Papers to be published in e-Perimetron may also be written in French, in accordance with the journal's editorial policy.

Registration: Free.

Accommodation: Barcelona offers a wide variety of travel and hotel options for booking on your own. The organizers suggest that participants arrange their travel and accommodation as early as possible.

Participation form: Please fill out the participation form [MSWord DOC] and send it as soon as possible to the contact e-mail addresses. Those who intend to present a paper should note a provisional title on the form.

Contact: [ Subject: ICA Workshop ]
The Commission Chair: livier@auth.gr
The Workshop’s Desk:
rafael.roset@icc.cat and pazarli@topo.auth.gr

Digital Geography in a Web 2.0 World

Late-breaking news from London about an interesting conference (notice by way of Digital Arts and Humanities):

The Centre for Advanced Spatial Analysis in association with the National Centre for E-Social Science proudly presents "Digital Geography in a Web 2.0 World", a one-day conference at the Barbican Centre, London ... on 20th February.

It will disseminate the work of the GeoVUE (UCL) and MoSeS (Leeds) nodes as well as covering work undertaken on NCeSS's ESRC-funded Business Engagement project (UCL and Manchester) and the Centre for Excellence in Teaching and Learning (CETL) SPLINT project (Leicester, Nottingham and UCL).

Participants must register at http://www.casa.ucl.ac.uk/barbican/programme.asp where the program is available. As this event has several lecture sessions, you can choose which ones you would like to attend.
Please note: the "programme" link is to a web page with an embedded GIF showing the conference schedule and details. This will prove completely inaccessible to the blind and visually impaired. I have been unable to find a textual version of the programme online. I suggest that, if you need one, you contact CASA, the organizing institution.

What does "Green" mean?

I'm really unclear on exactly how it is that "regular mowing along Memorial Parkway" makes Huntsville more "green". Are we talking about neatness, or environmental responsibility? The latest newspaper reporting seems to confuse the two (or maybe it's the self-congratulatory civic leaders they're interviewing).

Now, planting trees, land preservation, picking up litter, preserves/parks, and curbside recycling are all reasonable indicia of green-ness. The work of Forever Wild, the Nature Conservancy and the Huntsville Land Trust is truly laudable. And we shouldn't forget the city's trail/greenway efforts.

But someone needs to tell the mayor that mowing has a big carbon-and-air-quality footprint. Moreover, many would dispute the assertion that we have "good public transportation". There are no bus or train feeders from rapidly growing suburbia. The core shuttle-bus service is underused and under-promoted. No HOV lanes. Bad traffic snarls on almost every inbound and outbound route during peak times (so lots of idling). No significant promotion of car pooling that I can see.

And let's not even talk about the monster sprawl out in the county (e.g., drive Maysville Road between Maysville and Buckhorn sometime and tell me where those cotton fields and pastures are going, and where the inhabitants of those new houses are going to have to drive their SUVs in order to work, to eat, to shop). We can't call Huntsville "green" and ignore the massive changes going on in the hinterland just because it's a different jurisdiction -- it's all one big environmental system.

And there still has been no responsible grappling with water issues, despite the drought and good reporting in the Huntsville Times and on Alabama Public Television, as well as Lee Roop's wakeup calls ... not to mention the widely publicized specter of electrical shortfalls this summer if the TVA can't cool all its reactors.

Escape from PGeo, part deux

Last week, I posted a short bit on using ogrinfo and ogr2ogr to convert a layer in an ESRI personal geodatabase to a shapefile. Today we'll automate the process with some python programming to extract all layers (regardless of geometry) from an arbitrary pgeo file.

The following examples assume you're running python within an FWTools shell.

Get a list of layers

The first challenge is capturing the output from ogrinfo (the list of layers) into a form we can parse. Python gives us a number of methods for issuing commands to the hosting system environment. The one to use here is popen3 from the os module since it will let us invoke ogrinfo and then work with its output (specifically stdout) as a file-like object in code. We'll read the output lines into a python list for subsequent processing.

>>> import os
>>> infilename = "ItAntMapping.mdb"
>>> child_stdin, child_stdout, child_stderr = os.popen3("ogrinfo %s" % infilename)
>>> output = child_stdout.readlines()
>>> output
["INFO: Open of `ItAntMapping.mdb'\n", " using driver `PGeo' successful.\n", '1: whollyimprecise\n', '2: placesAdded\n', '3: placesEstimated\n', '4: placesSolid\n', '5: stretchesDangling\n', '6: stretchesFloating\n', '7: stretchesSolid\n', '8: tpPoints\n', '9: stretchesUnlocated\n']

Now we want to clean up this list of lines so we're left with a list of layer names (no prefixed numbers, no newline characters, no strings other than layer names). We'll iterate through each line in the output and, if a python regular expression designed to distinguish layer names from ogrinfo's other output finds a match, copy the relevant portion of the matched line into a new list of "layers". We'll need the python regular expression module (re) for this step.

In pseudocode:
  • set up a regular expression to match a line that begins with a string of digits, a colon, and a space, followed by a string of arbitrary length and a newline (group the string that is the layer name)
  • create a new list to hold the layer names
  • for each line in the output list:
    • attempt a regular expression match
    • if matched: append the contents of the group (the layer name) to the layers list
In python:
>>> import re
>>> regex = re.compile('^\d+: (.*)\n')
>>> layers = []
>>> for line in output:
...     m = regex.match(line)
...     if m:
...         layers.append(m.groups()[0])
...
>>> layers
['whollyimprecise', 'placesAdded', 'placesEstimated', 'placesSolid', 'stretchesDangling', 'stretchesFloating', 'stretchesSolid', 'tpPoints', 'stretchesUnlocated']

Extract each named layer

Now the easy part. Iterate through the list of layers, invoking ogr2ogr for each layer. We don't need to capture any i/o streams this time, so we'll just use the plain-vanilla system method.
>>> for layer in layers:
...     os.system('ogr2ogr -f "ESRI Shapefile" %s.shp ItAntMapping.mdb %s' % (layer, layer))
...
>>> ^Z
C:\Users\Tom\Documents\itins>dir *.shp
Volume in drive C has no label.
Volume Serial Number is 2C47-654D

Directory of C:\Users\Tom\Documents\itins

02/04/2008 01:03 PM 156 placesAdded.shp
02/04/2008 01:03 PM 7,464 placesEstimated.shp
02/04/2008 01:03 PM 85,780 placesSolid.shp
02/04/2008 01:03 PM 100 stretchesDangling.shp
02/04/2008 01:03 PM 100 stretchesFloating.shp
02/04/2008 01:03 PM 62,420 stretchesSolid.shp
02/04/2008 01:03 PM 1,684 stretchesUnlocated.shp
02/04/2008 01:03 PM 100 whollyimprecise.shp
8 File(s) 157,804 bytes
0 Dir(s) 172,207,665,152 bytes free

Package up the code

I've incorporated these steps into a python script, along with some error handling and a usage message. It's a bit more generalized than what's shown above. Enjoy, and let me know about mistakes or suggestions for improvement.
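
For the curious, here's a rough sketch of how the pieces above might be packaged as a standalone script. This is my illustration, not the actual script: the file name (pgeo2shp.py), the function names and the error-handling details are hypothetical, and it assumes Python 2 inside an FWTools shell, as above.

# pgeo2shp.py -- hypothetical sketch: extract all layers from an ESRI
# personal geodatabase to shapefiles (Python 2, run inside an FWTools shell).
import os
import re
import sys

def get_layers(pgeo_file):
    # Capture ogrinfo's output and keep only the numbered layer lines.
    child_stdin, child_stdout, child_stderr = os.popen3('ogrinfo "%s"' % pgeo_file)
    output = child_stdout.readlines()
    regex = re.compile('^\d+: (.*)\n')
    layers = []
    for line in output:
        m = regex.match(line)
        if m:
            layers.append(m.groups()[0])
    return layers

def extract_layers(pgeo_file):
    # Shell out to ogr2ogr once per layer, as in the interactive session.
    layers = get_layers(pgeo_file)
    if not layers:
        print 'no layers found in %s' % pgeo_file
        sys.exit(1)
    for layer in layers:
        status = os.system('ogr2ogr -f "ESRI Shapefile" %s.shp "%s" %s'
                           % (layer, pgeo_file, layer))
        if status != 0:
            print 'warning: ogr2ogr exited with status %d for layer %s' % (status, layer)

if __name__ == '__main__':
    if len(sys.argv) != 2:
        print 'usage: python pgeo2shp.py <file.mdb>'
        sys.exit(1)
    extract_layers(sys.argv[1])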

right-to-left in blogger

Last week, I griped about blogger's announcement of right-to-left text support because the post seemed to say that a point-and-click mechanism for mixing right-to-left and left-to-right text was only available in blogs set to the newly available Arabic, Hebrew or Persian languages.

I got a comment to the effect that I was wrong and that the settings are available in the dashboard. After an inspection of the dashboard for this blog, and of every single customization tab, I conclude that I was not wrong.

You can only get compose-gui support for mixing RTL and LTR in posts if your blog language is set to Arabic, Hebrew or Persian. The relevant blogger help entry confirms this view (emphasis mine):

If you're not seeing the directionality buttons in the post editor, it's likely because right-to-left support is only available in the Hebrew, Arabic and Persian interfaces.

I stand by my original complaints.

Short-notice CFP: Contributory GIS for Historical Research

This, by way of H-HISTGEOG:

From: Mary B. Ruvane [ruvane@email.unc.edu]
Date sent: 29 Jan 2008

Apologies for the late notice and cross posting. The deadline for formal abstracts may be extended, but a statement of interest should be submitted as soon as possible.

Call for Papers: 2008 Annual Meeting of the Social Science History Association (www.ssha.org)
Session Theme: CONTRIBUTORY GIS FOR HISTORICAL RESEARCH
Location/Date: Miami, Florida, USA, 23-26 October 2008
Proposal Deadline: February 1, 2008 -- extension requests considered
Organizers:
  • Ian Gregory, Lancaster University, UK
  • Mary Ruvane, University of North Carolina at Chapel Hill, USA
Contact: Mary Ruvane (Ruvane@email.unc.edu)

Real-time contributory Web applications are fast becoming the de facto tool of choice for facilitating timely information exchange between various social groups. Established examples include wikipedia, facebook, and flickr. More recently, applications for sharing geographic information have emerged, such as GoogleEarth and its companion Wikimapia, providing an unprecedented opportunity for historical researchers to collaborate on reconstructing past geographies. But how trustworthy are these burgeoning websites? Is the shared information accurate, in standard formats, well documented, or peer reviewed? If they are to be viable tools in support of academic research, these concerns must be addressed.

This session seeks speakers who have successfully adopted contributory GIS tools in support of their historical research or teaching. Topics may include, but are not limited to:
  1. Projects utilizing contributory Historical GIS
  2. Accuracy of geographic representations
  3. Trustworthiness of shared data
  4. Data standards solutions
  5. Authenticating archival source material
  6. Peer review/moderator solutions
  7. Dealing with inferences
  8. Privacy & security issues
  9. Data contributor disparities (e.g., amateurs, geographers, historians, etc.)
NOTE: Applications for Graduate Student travel awards are due February 1, 2008 (http://ssha.org/conference/travel-grant)

Mary B. Ruvane
PhD Student
School of Information & Library Science
University of North Carolina @ Chapel Hill

Growing Pains at BP3

I've been watching Bloggers for Peer-Reviewed Research Reporting and their Research Blogging aggregator with interest since learning about it from Alun Salt last week. I was taken aback by today's announcement from Dave Munger that they are turning down applications "because the blogs aren't written in English." My brain mumbled, "show-stopper." But I read on ...

I understand this problem: "The organization as it stands now simply doesn't possess the language skills to verify that blogs written in other languages are living up to our guidelines." Munger outlines some steps aimed at building community capability for handling non-English content. Since the BP3 model is predicated upon some human evaluation of a blog's "living up to [BP3] guidelines," they'll have to rise to the challenge or go out of business.

Another problem is poorly expressed: "readers might be turned off by a site that includes many posts written in a language they don't understand." How about (and this is clearly what Munger means if you read the whole post): users will need the ability to customize language and script settings. If I may, this will need to apply both to the interface and to the filtering of content. And please don't bind these choices together! For example, I'd want an English-language interface but content in (at least) English, French, German, Greek, Italian, Portuguese, Romanian and Spanish. This is one of the reasons we chose Plone as the platform for Pleiades: it comes localization-ready out of the box, and with a minimum of work you can manage multilingual content.

If the site hopes to mature beyond Anglophone scientific content, it's going to have to go multilingual. There's a whole world of humanistic scholarship out there just waiting to go digital, and, interestingly, much of it is in languages other than English.

This reminds me, I need to write a rant about Blogger's recently announced pseudo-support for bidirectional text editing. Hint: I can't embed right-to-left Arabic, Hebrew or Persian in this post, but if I were to reset my blog's language settings to one of those languages then I could mix in left-to-right English (or whatever) here. I bet I'd have to manually hack the HTML to mark the "foreign" snippets for language and script per RFC 4646 too ...
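
By hand-hacking I mean something like the following (a hypothetical snippet of my own; Blogger's editor may well generate different markup), with a lang attribute carrying an RFC 4646 language tag and a dir attribute giving the writing direction:

<p>An English sentence with an embedded
<span lang="ar" dir="rtl">مثال</span> (Arabic for "example") in the middle.</p>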