I heart this WMS

I’ve written my share of catalogues, Capabilities parsers, map clients, and context import/export tools, and I know that having good example WMS instances is paramount when testing functionality and building features. I keep a handy list of WMS servers which I use constantly when writing code.

Bird Studies Canada provides WMS access to their various bird distribution and abundance data. BSC has made every effort to:

  • populate their Capabilities metadata exhaustively. Title, abstract, keywords, and even MetadataURL pointers to FGDC XML documents for all layers. And _full_ service provider metadata (including Attribution, which is great for displaying Logo images, etc.)
  • return GetFeatureInfo in both GML and HTML for prettier responses
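
As a rough illustration of how parseable this makes the service, here's a minimal OWSLib sketch that walks such a Capabilities document (the endpoint URL below is a placeholder, not BSC's actual address):

from owslib.wms import WebMapService

# placeholder endpoint; substitute the actual GetCapabilities URL
wms = WebMapService("http://example.org/wms", version="1.1.1")

# service level metadata
print wms.identification.title
print wms.identification.abstract
print wms.provider.name

# layer level metadata: title, abstract, keywords
for sName in wms.contents:
    oLayer = wms.contents[sName]
    print sName, oLayer.title, oLayer.keywords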

This WMS is always at the top of my testing list, and it's my first answer when people ask to see a well-constructed WMS example; it also serves catalogue and search demos very well indeed.

Kudos to BSC!

You know you’re getting old when…

I embarked on a Google search to find information about Polygon statistics, and lo and behold, I had posted this on my website years ago.

Goodbye memory!

Making W*S suck less

I’m starting to work on contributing SOS and OWS Common support to OWSLib, a groovy and regimented little GIS Python project.

So far so good; some initial implementations are done (I hope to commit soon, once I've written tests around them).  I think this will add value to the project, since SOS 1.0 has been around long enough for implementations to start appearing.  And the OWS Common support will act as a baseline for all calling specs/code to leverage.
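
To illustrate the baseline idea, here's a sketch of the pattern only (not OWSLib's actual class layout, and with XML namespaces omitted for brevity):

from xml.etree import ElementTree

# sketch only: the common OWS building blocks live in one place...
class ServiceIdentification:
    """ows:ServiceIdentification metadata shared by every OWS spec"""
    def __init__(self, element):
        self.title    = element.findtext("Title")
        self.abstract = element.findtext("Abstract")

# ...and each version-specific client (SOS, WPS, etc.) reuses them
class SensorObservationService:
    def __init__(self, sXml):
        root = ElementTree.fromstring(sXml)
        self.identification = ServiceIdentification(
            root.find("ServiceIdentification"))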

And it’s been a nice journey in Python for me so far.  Another thing I like about this project is the commitment to testing — awesome!

GDAL Saves the Day Again

A piece of work I help out with involves visualizing and accessing hydrometric monitoring data over the Web. Part of this involves managing and publishing voluminous databases of monitoring information.

We use Chameleon for basic visualization and querying of the data. Behind the scenes, we run a slew of complex processes (shell scripts via cron) to output the data in a format that MapServer (which we use to publish WMS layers) can understand. The processes work across many disparate database connections, so outputting the data to shapefiles and accessing them locally helps performance in the web mapping apps. ogr2ogr is used exclusively and extensively for the access and format translation.
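
A typical translation step looks something like this (the PostGIS connection string and layer name here are hypothetical):

$ ogr2ogr -f "ESRI Shapefile" stations.shp PG:"host=dbhost dbname=hydro" stations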

Well, today I found out that an effort had begun to write a bunch of scripts to additionally output OGC KML. Thank goodness things hadn't gotten very far, because the following addition to our processes:

$ ogr2ogr -f KML foo.kml bar.ovf -dsco NameField=NAME -dsco DescriptionField=COMMENT

…worked like a charm, and put a big smile on people’s faces!

So now, OGC KML is also supported for visualization in Earth browsers. Just like that.

Output styles are relatively simple; I’m thinking a -dsco like:

-dsco LayerStyle=LayerName,styles.kml#mystyle

…would point to a style ID within an existing (local or remote) KML style document via a URI fragment, i.e.:

<styleUrl>styles.kml#mystyle</styleUrl>

Of course the default behaviour would be in place if this -dsco is not defined. I’ll see what the GDAL KML gurus think about this.
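
Each output Placemark would then simply carry a reference to the shared style, e.g. (with a made-up feature name):

<Placemark>
  <name>Station 01AB002</name>
  <styleUrl>styles.kml#mystyle</styleUrl>
</Placemark>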

At any rate, once again, thank you GDAL for being an uber-utility for day-to-day GIS tasks. Happy faces everywhere!

pivoting in Python

I needed to do some pre-processing of data which involved transposing column names to values. The condition was that the value in each respective column (a frequency count) had to be greater than zero, i.e. the phenomenon had been measured at least once.

My input was a csv file, and my goal was an output csv file which would feed into a batch database import process.

ID,DA,NL,PHENOM1,PHENOM2,PHENOM3,PHENOM4
233,99,44,0.00,27.00,12.00,0.00

The other interesting bit was that the condition applied only to a range of columns; the other columns represented ancillary data.

Enter Python:

#!/usr/bin/python

import sys
import csv

# open file and read headers
fPhenomenon = open("phenomenon.txt","r")
sHeaders    = fPhenomenon.readline().strip().replace(r'"','') # strip newline, drop quotes
aHeaders    = sHeaders.split(",")

# feed the rest to csv
csvIn  = csv.reader(fPhenomenon)
csvOut = csv.writer(sys.stdout)

for sRowIn in csvIn:
    aRowOut = []
    aPhenomenon = []
    aRowOut.append(sRowIn[0]) # procedure ID
    aRowOut.append(sRowIn[1]) # major drainage area ID
    for nIndexTupleVal, tupleVal in enumerate(sRowIn[3:-1]):
        if (float(tupleVal) > 0): # phenomenon measured at least once
            # add phenomenon name to list
            aPhenomenon.append(aHeaders[nIndexTupleVal+3])
    # add phenomenon list to record
    aRowOut.append(",".join(aPhenomenon))
    csvOut.writerow(aRowOut)

Notes

  • hooray for raw strings!
  • enumerate() is great and saves you the trouble of declaring your own counter
  • like any language, modules/libraries make things so easy to work with
  • I wish the header handling was a bit cleaner (I should look further into the csv module w.r.t. headers)

That’s my hack for the day. Have a good weekend!

UPDATE: ah, the csv module's reader objects have a .next() method, which can be used instead of the hand-rolled attempt I made above to regularize / split / store the header list.
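
Something like this (a minimal sketch) would replace the manual header handling:

import csv

fPhenomenon = open("phenomenon.txt", "r")
csvIn    = csv.reader(fPhenomenon)
aHeaders = csvIn.next()  # csv handles the quote stripping and splitting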

Documenting the History of MapServer

Inspired by the recent thread on FOSS4G history, I started an effort to document MapServer's history, from its beginnings in the mid-1990s. Check out the progress we've made so far. If anything is missing or in error, feel free to contribute!

FOSS4G History

Mateusz posted a link to an interesting topic on osgeo-discuss. I think it’s a great idea to document the history of geospatial and open source, and I echo Dave’s comments on how Wikipedia would be an ideal home for documentation and maintenance.

Perhaps the best way to go about this would be for the various projects on Wikipedia (MapServer, GDAL, GeoTools, GRASS, etc.) to document their respective histories, and allow the main Wikipedia OSGeo page to link to them accordingly.

Thoughts? Are there better alternatives to Wikipedia? Should projects document their history on their own respective websites, which Wikipedia then references?

Is REST “faster”?

I was in a REST/Web 2.0 workshop, and someone asked how REST, given that it runs over HTTP (a stateless protocol), is any faster than other or previous approaches.

I'm not sure that REST does anything to speed up HTTP's request/response mechanics; but using AJAX surely enhances the user experience with perceived responsiveness, since things happen asynchronously.

Or is there more to it?

MapServer 5.2 released

Fresh off the press, MapServer 5.2 has been released. A total of 196 issues were fixed in 5.2, and a number of enhancements were added. Sources can be fetched from http://mapserver.gis.umn.edu/download/current.

Good job everyone!

looking forward to GeoWeb

I haven’t been to the GeoWeb conference in a couple of years, and given all the changes and advancements in the geospatial web over that time, this conference should prove to be quite interesting!

I’m also looking forward to attending the Open Source and Geo-Semantics and REST/JavaScript/Web 2.0 workshops.

If you're going to GeoWeb, I look forward to seeing you there!
