Creating sitemap files for GeoNetwork

Sitemaps are a valuable way to index your content for web crawlers.  GeoNetwork is a great tool for metadata management and a portal environment for discovery.  I wanted to publish all metadata resources as a sitemap so that content can be found by web crawlers.  Python to the rescue:

#!/usr/bin/python
import MySQLdb
# connect to db
db = MySQLdb.connect(host='127.0.0.1', user='foo', passwd='foo', db='geonetwork')
# print out XML header
print """<?xml version="1.0" encoding="UTF-8"?>
<urlset
 xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
 xmlns:geo="http://www.google.com/geo/schemas/sitemap/1.0"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9
 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">"""

# fetch all metadata
db.query("""select id, schemaId, changeDate from Metadata where isTemplate = 'n'""")
r = db.store_result()

for row in r.fetch_row(0): # 0 = fetch all rows; write out a url element
    if row[1] == 'fgdc-std':
        url = 'http://devgeo.cciw.ca/geonetwork/srv/en/fgdc.xml'
    elif row[1] == 'iso19139':
        url = 'http://devgeo.cciw.ca/geonetwork/srv/en/iso19139.xml'
    else: # unknown schema: skip this record
        continue
    print """ <url>
  <loc>%s?id=%s</loc>
  <lastmod>%s</lastmod>
  <geo:geo>
   <geo:format>%s</geo:format>
  </geo:geo>
 </url>""" % (url, row[0], row[2], row[1])
print '</urlset>'

Done!  It would be great if this were an out-of-the-box feature of GeoNetwork.
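As an aside, once the generated file is served from your site, crawlers can be pointed at it from robots.txt (the URL below is only a placeholder):

```
Sitemap: http://example.org/sitemap.xml
```

The `Sitemap:` directive is part of the sitemaps.org protocol and is understood by the major crawlers.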

Using Python to parse config files

A lot of tools out there have some sort of configuration which is read at run time and used in the process accordingly.  When writing tools, my config file format has always been something like:

title: My Tool
# commented out line

description: This is my tool.  # another comment

Since I’m using Python for much of my scripting these days, I decided to write a small parser to handle this type of config.  So here’s what I’ve come up with:

import fileinput

def parse(file=None, delim=':'):
    '''
        Parses a config file formatted like:
        foo: bar
        # comments: out line
        - comments allowed (#)
        - empty lines allowed
        - spaces allowed

    '''

    d = {}

    if file is None:
        return d

    for line in fileinput.input(file):
        line = line.strip()
        if not line: # skip empty or space-padded lines
            continue
        if line.startswith('#'): # skip commented lines
            continue
        # pick up key and value pairs; split on the first delimiter
        # only, so values may themselves contain the delimiter
        key, value = line.split(delim, 1)
        # strip any trailing inline comment from the value
        d[key.strip()] = value.split('#')[0].strip()
    return d

Seems to work well so far.  I wonder if there’s a config file standard out there?
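On the question of a standard: the closest thing is probably the INI format, which the standard library already parses (a quick sketch; in Python 2 the module is spelled ConfigParser):

```python
import configparser  # stdlib; named ConfigParser in Python 2

# configparser reads the INI format, which accepts ':' as well as '='
# as the key/value delimiter -- it just insists on a [section] header
sample = """
[tool]
title: My Tool
description: This is my tool.
"""

config = configparser.ConfigParser()
config.read_string(sample)
print(config['tool']['title'])        # My Tool
print(config['tool']['description'])  # This is my tool.
```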

MapServer Disaster: you have got to be kidding me

http://n2.nabble.com/FW%3A-MapServer-enhancements-refactoring-project-td2571268.html

I’m beyond words at this point.

fun with Shapelib

We have some existing C modules which do a bunch of data processing, and wanted the ability to spit out shapefiles on demand.  Shapelib is a C library which allows for reading and writing shapefiles and dbf files.  Thanks to the API docs, here’s a pared down version of how to write a new point shapefile (with, in this case, one record):

#include <stdio.h>
#include <stdlib.h>
#include <libshp/shapefil.h>
/*
 build with: gcc -O -Wall -ansi -pedantic -g foo.c -L/usr/local/lib -lshp -o foo
*/
int main() {
    int i = 0;
    double *x;
    double *y;

    SHPHandle  hSHP;
    SHPObject *oSHP;
    DBFHandle  hDBF;

    x = malloc(sizeof(*x));
    y = malloc(sizeof(*y));

    /* create shapefile and dbf */
    hSHP = SHPCreate("bar", SHPT_POINT);
    hDBF = DBFCreate("bar");

    DBFAddField(hDBF, "stationid", FTString, 25, 0);

    /* add record */
    x[0] = -75;
    y[0] = 45;
    oSHP = SHPCreateSimpleObject(SHPT_POINT, 1, x, y, NULL);
    SHPWriteObject(hSHP, -1, oSHP);
    DBFWriteStringAttribute(hDBF, 0, 0, "abcdef");

    /* destroy */
    SHPDestroyObject(oSHP);

    /* close shapefile and dbf */
    SHPClose(hSHP);
    DBFClose(hDBF);
    free(x);
    free(y);

    return 0;
}

Done!

Less Than 4 Hours

A benefit of open source.

< 4 hours.  That’s how long it took to address a MapServer bug in WMS 1.3.0.  Having been on the other side of these many times, it’s gratifying to bang out quick fixes as well.

Committing often 🙂

MapServer Code Sprint Progress

MapServer action from the Toronto Code Sprint 2009:

Paul has full details on his blog (day 1, day 2, day 3, day 4, post-mortem).  More details from Chris (day 1, day 2, day 3, day 4).  Also check out some pictures from the event.

Personally, I was happy to bang out fixes for:

  • optionally disabling SLD for WMS (#1395)
  • support for resultType=hits for WFS (#2907)
  • working code for WFS spatial filters against the new GEOS thread safe C API (#2929)
  • WFS 1.1.0 supporting OWS Common 1.0.0 instead of 1.1.0 (#2925)
  • the beginnings of support for correct axis ordering for WFS 1.1.0 (#2899)

Good times!

UPDATE 12 March 2009: here’s a Camptocamp report of the event.

TO Code Sprint is upon us

The code sprint starts Saturday, and there’s a good turnout of folks from the various OSGeo projects.

If you’d like to participate, you can join us on IRC at #tosprint and be there in spirit.

MapServer 5.4.0-beta1 is out

Check it out.  A few RFCs addressed, among them OGC WMS 1.3.0 server support.

WMS 1.3.0 now in MapServer trunk

Fresh in svn trunk, MapServer now has WMS 1.3.0 Server support and will be part of the forthcoming 5.4 release.

It will be interesting to see how much use WMS 1.3.0 gets, given the significant changes from 1.1.1.

Great work Assefa!

OWS Metadata Matters

This has seemingly been the theme for me in the last few weeks.  From publishing to discovery, lack of metadata in OWS endpoints results in increased metadata management away from source, as well as crappy search results.

So here’s some friendly advice:

Service Metadata

  • fill out title, abstract (representative of the OWS as a whole) with descriptive metadata
  • fill out keywords to categorize the service.  If possible, use a known thesaurus, or one specific to your organization.  Don’t use keywords like “OGC”; we already know it’s an OGC service from the get-go by interacting with it
  • fill out contact information.  OWS Common defines ServiceProvider metadata constructs, so if your organization has a service provider dishing out your OWS, they belong in this metadata.  This is a contact person for the service itself, not the data
  • fill out Fees and AccessConstraints.  If there aren’t any, use the term “None”
  • the OnlineResource for Service Metadata might be some website, not the URL of the service itself (we already get this from the OperationsMetadata)
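To make this concrete with one implementation: in MapServer, these service-level fields map onto metadata keys in the map's WEB object (the key names are MapServer's WMS metadata; the values here are invented):

```
WEB
  METADATA
    "wms_title"                  "Hydrometric Monitoring Stations"
    "wms_abstract"               "Real-time water level and flow observations"
    "wms_keywordlist"            "hydrology,water,monitoring"
    "wms_contactperson"          "Jane Smith"
    "wms_contactorganization"    "Example Water Agency"
    "wms_fees"                   "None"
    "wms_accessconstraints"      "None"
    "wms_service_onlineresource" "http://example.org/about"
  END
END
```

Note that MapServer distinguishes wms_service_onlineresource (the website reference in the Service section) from wms_onlineresource (the URL of the service itself).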

Content Metadata

  • fill in title, abstract and keywords in the same manner as above, specific to the given Layer/FeatureType/Coverage/ObservationOffering.  A title like “ROAD_1M” doesn’t cut it
  • your data comes with an FGDC or ISO 19115 XML document already, right?  🙂 Use MetadataURL to point to the XML document.  Smart catalogues will harvest this too and associate it with the resource
  • WMS DataURL: if the data can be downloaded online (tgz/zip/etc.), point to it here.  Or, put a pointer to an access service like WFS/WCS/SOS
  • WMS Layer Attribution: this provides reference to the content provider (URL, title and LogoURL).  Filling in LogoURL is neat as catalogues can display this when users search for content.  If possible, use an image of smaller dimensions so as to display as a thumbnail
  • Last but not least, bounding boxes.  Whether your OWS software automagically calculates these per layer on the fly, or you can override these and set before runtime, please set spatial extents accordingly.  This improves searching spatially by leaps and bounds.  Don’t settle for the often used default of -180, -90, 180, 90 unless it is really a global dataset
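Continuing the MapServer example at the layer level (again, key names are real MapServer WMS metadata; the values are invented):

```
LAYER
  NAME "roads"
  METADATA
    "wms_title"                      "Roads of Canada, 1:1 million"
    "wms_abstract"                   "National road network, generalized to 1:1M"
    "wms_keywordlist"                "transportation,roads,Canada"
    "wms_metadataurl_href"           "http://example.org/metadata/roads.xml"
    "wms_metadataurl_format"         "text/xml"
    "wms_metadataurl_type"           "FGDC"
    "wms_dataurl_href"               "http://example.org/data/roads.zip"
    "wms_dataurl_format"             "application/zip"
    "wms_attribution_title"          "Example Mapping Agency"
    "wms_attribution_onlineresource" "http://example.org/"
    "wms_attribution_logourl_href"   "http://example.org/logo.png"
    "wms_attribution_logourl_format" "image/png"
    "wms_extent"                     "-141 41 -52 84"
  END
END
```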

From here, OGC Catalogues will be able to harvest your metadata and provide useful search results.  For wider spread discovery, throw an OpenSearch definition in front of your CSW.  Wrap your OWS endpoints in KML/GeoRSS documents (Geo Sitemaps too), and you’ll power mainstream use of your stuff.

Bye bye useless searches!

Modified: 7 November 2022 17:45:12 EST