pygeometa is a handy little metadata generator tool which is flexible, extensible, and composable. From the command line or via the API, users can generate config files, or pass plain old Python dicts, ConfigParser objects, etc.
We’ve just released 0.2.0 which supports WMO Core Metadata Profile output, as well as better multilingual support. At this point we’re embarking on breaking changes in master led by moving to YAML as the configuration format.
Given pygeometa is pre-1.0, in theory changes can be breaking without notice. Still, I’ve cut a 0.2 branch in case anyone’s existing workflows depend on the (now) old pygeometa functionality.
As always, bug reports and feature requests are more than welcome. Hopefully the new enhancements will make metadata management even easier for agile workflows.
There was lots of discussion on refactoring pycsw’s filter support to enable NoSQL backends. While we are still in discussion, this enhancement should open the doors for any backend (ElasticSearch, SOLR, a GitHub repository, another API, etc.). In addition, Frank Warmerdam started writing a pycsw OGR backend to support CSW exposure of the Planet Scenes API via OGR. This also presents exciting possibilities given OGR’s support of numerous underlying formats. Frank also provided valuable advice and feedback on interacting with pycsw as a developer/contributor. Thank you Frank!
GeoHealthCheck
There has been lengthy discussion on a next-generation GHC, including a renewed architecture with core work on the model as well as an API. A basic architecture has surfaced as a result, which focuses on having the UI work exclusively with the API, as well as a plugin framework which Just van den Broecke has started working on. I also worked on tagging, which will be the last piece before cutting a release and forging ahead on the new architecture.
pygeometa
The focus on pygeometa is now on renewing the MCF format from .ini to YAML. Initial pieces are completed in a dev branch which I plan to merge once we clear current issues and cut a stable release.
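For illustration, a small MCF fragment in the new YAML form might look like the following (section and key names here are indicative of the direction, not a final format):

```yaml
mcf:
  version: 1.0

metadata:
  identifier: 3f342f64-9348-11df-ba6a-0014c2c00eab
  language: en
  charset: utf8

identification:
  title:
    en: sample dataset
  abstract:
    en: a sample abstract, with multilingual support via language keys
```

YAML keeps the flat key/value feel of .ini while allowing nesting and native lists, which the old ConfigParser format could only fake with delimiters.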
Summary
While I couldn’t get to everything I planned for, I think significant steps were made in moving the above projects forward along their respective roadmaps. It was also great to see some familiar faces as well as new contributors and projects!
It’s been quite a while since I did one of these, so here goes. Some notables from 2016:
pycsw: the release of 2.0 “Doug” provided the first OGC compliant CSW 3.0 implementation, as well as Python 3 support. These two major enhancements provide the long term backbone for the project moving into the future
GeoHealthCheck: GHC provided the inspiration for the Harvard Hypermap project. In addition, the project is being used in numerous internal environments and has caught the itch of Just van den Broecke! It’s amazing what happens when you put a UI on top of workflows
PyWPS: version 4.0 was released which represented a major update/rewrite/licence change of the project. For WOUDC, we’ve implemented PyWPS as part of real-time workflows for data validation. Finally, the project has moved along the OSGeo incubation process nicely and is hours away from being submitted for project graduation
pygeometa: the little metadata creation tool now supports the WMO Core Metadata Profile
GeoNode: now an OSGeo project!
health
another year of not smoking (since circa 2012)
I lost 35 lbs in 2016 thanks to a true, deep commitment to the Greek/Mediterranean diet. A huge thank you goes out to Olive Tomato, which has provided awesome recipes and advice
For 2017:
pycsw: look for some big improvements to our test suite, as well as ElasticSearch support
pygeometa: move to YAML as the configuration format
PyWPS OSGeo incubation: we’re almost there! Hoping to complete this by spring
GeoHealthCheck: implementing a GHC API and plugin mechanism are two key enhancements which we will hopefully tackle at the OSGeo Code Sprint in Daytona Beach. We’ll also be following the developments of the newly formed OGC Quality of Service and Experience Domain Working Group
In 1999 I went to a GIS conference and watched a vendor presentation on their WMS product. A key feature was being able to reproject data on the fly. This appealed to me, as these were the early days of JavaScript development for me, along with Mike Adair (which eventually, much later, led to the proj4js project). Thousands and thousands of projections one could choose from a select box and boom: coordinate transformation for your WMS layer.
I sat in shock for the remainder of the presentation thinking of the complexity and all the math involved. After their presentation, I mentioned this to the presenter offline, who replied “it’s very hard and complex work, yes”.
Fast forward to around 2002, and it turns out they were indeed using proj.4, which initially made me think, “ah, that’s easy, then”.
Ah, youth.
These days, I would say it’s not that easy. Integration, upstream changes, versions, packaging and deployment. Moving parts. Different issues. It’s smart, strategic and preferable not to re-invent the wheel and to use existing libs, but the work certainly doesn’t end there.
(For what it’s worth, the vendor [it doesn’t matter who they are] and their product are still around and going strong)
It’s been almost two years since GeoHealthCheck was initially developed (en route to FOSS4G in PDX). Since then, GHC has been deployed in numerous environments in support of monitoring of (primarily) OGC services (canonical demo at http://geohealthcheck.osgeo.org).
Project communications have been relatively low key, with GitHub issues being the main discussion forum. The project has set up a Gitter channel as a means to discuss GeoHealthCheck in a public forum more easily. It’s open and anyone can join. Come join us on https://gitter.im/geopython/GeoHealthCheck!
It seems like ages ago since the initial QGIS MetaSearch announcement and call for help in 2014. Inspired by Sourcepole’s FOSS4G 2015 presentations, here’s a brief status update:
MetaSearch is now a core plugin shipped with QGIS (!!)
A sincere thanks to Richard Duivenvoorde, Angelos Tzotsos, Alexander Bruy, Tim Sutton and the rest of the QGIS developers/community for helping bring MetaSearch into QGIS to help move the search / discovery workflow forward!
As far as a roadmap, here’s a laundry list of future items:
OWSLib dependency cleanup: currently we manage a copy of OWSLib in QGIS proper. This is because there is a gap in packaging across supported platforms. It would be great to have approved OWSLib packages (see issue)
Metadata publishing and management: it would be great to manage and publish better metadata directly from MetaSearch. The end result will be a more streamlined, deeper integration and support of metadata within QGIS. No movement on these yet, but there are QEPs proposed
ISO based servers: MetaSearch supports the OGC Core CSW model. Most CSWs implement the CSW ISO Application Profile which supports more detailed metadata
add data functionality: it would also be great to directly add raw data from a metadata record’s access links into QGIS. We already support this for OGC services, and supporting direct data downloads to visualize in QGIS would complete the “publish/find/bind” workflow
Do you have any enhancements you would like to see in MetaSearch? Feel free to bring them in the MetaSearch issue tracker or the QGIS mailing lists! Do you have fixes or features to contribute? Feel free to fork and send pull requests!
CSW has a good presence on the server side (pycsw, GeoNetwork Opensource, deegree, ESRI Geoportal are some FOSS packages). From the client side, OWSLib is the go to library for Python folks. QGIS has MetaSearch (which uses OWSLib).
At the same time, it’s been a while since I’ve delved into deep JavaScript. These days, we have things like JavaScript on the server, more emphasis on testing, building/packaging, and so on. You can do it all with JavaScript if you want.
Wouldn’t it be great to have a generic CSW JavaScript client? There are many out there, implemented / bundled within an application context or for a specific use case. But what about a generic lib? Kind of like OWSLib, but for JavaScript.
Say hello to csw4js. The main goal here is to build an agnostic CSW client for JavaScript that can work with/feed:
– geospatial libs like OpenLayers, Leaflet
– web frameworks like jQuery, AngularJS, and so on
– JavaScript muscle for namespacing, structure, etc.
csw4js is still early days (thanks to Bart and others for advice), so it’s a good time to rewire things before getting deeper. Interested in helping out? Get in touch!
It’s great to see QGIS rising to fame in terms of a great desktop GIS tool. Part of what makes QGIS so great is the vast ecosystem of plugins. And Python support makes it easy to write plugins fast, especially atop existing libraries.
CSW client support in QGIS has been via the excellent CSWClient plugin. The MetaSearch project forks CSWClient and will make the following initial improvements:
QGIS 2.0 support
added Catalogue types in addition to CSW (JSON APIs, OpenSearch, etc.)
XML highlighting
documentation using Sphinx
i18n/continuous localization for both UI and docs, using Transifex
code maintenance (easy to deploy for developers, automated build, packaging and dependency management)
As the number of pycsw deployments increases, we’ve started to keep a living document of live deployments on the pycsw wiki. Being a geogeek, naturally I said to myself, “hmm, would be cool to plot these all on a map”. Embedding maps has become easier than ever, and projects like MapServer and GeoServer have cool maps right on their homepages, which demo their maps against a theme like the next FOSS4G conference, etc.
pycsw is a bit different in that it doesn’t do maps, but certainly catalogues them and makes them discoverable via OGC:CSW, OpenSearch and SRU. And putting a sample GetRecords output on the website as a demo is boring. So mapping live deployments seemed like a cool idea for a quick hack with reproducible workflow so it doesn’t become a pain to keep things up to date.
The pycsw website is managed using reStructuredText and Sphinx; source code, issue tracker and wiki are hosted on GitHub. The first thing was to update each deployment on the wiki page with a lat/long pair (the lat/long pair being loosely based on the location of the CSW itself, or the content of the CSW. Aside: it would be cool if CSW Capabilities XML specified a BBOX like WMS does to give folks an idea of the location of records).
After this, I wrote a Python script to fetch (and cache) the raw wiki page content. Then, using Leaflet, I set up a simple map and created markers for each live deployment.
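The fetch-and-generate step can be sketched roughly as follows (the wiki line format and the function name here are hypothetical, purely for illustration; the real script fetches and caches the raw wiki page over HTTP):

```python
import re

def wiki_to_leaflet(wiki_text):
    """Turn wiki lines like '* host @ lat,long' into Leaflet marker calls.

    The '* host @ lat,long' line format is invented for this sketch;
    the actual pycsw wiki page layout differs.
    """
    markers = []
    for name, lat, lon in re.findall(
            r"\*\s*(\S+)\s*@\s*(-?\d+\.?\d*),\s*(-?\d+\.?\d*)", wiki_text):
        # one L.marker(...) statement per live deployment
        markers.append(
            "L.marker([%s, %s]).addTo(map).bindPopup('%s');" % (lat, lon, name))
    return "\n".join(markers)

sample = "* demo.pycsw.org @ 45.4,-75.7\n* example.org/csw @ 52.1,5.2"
snippet = wiki_to_leaflet(sample)
print(snippet)
```

The output is a plain JavaScript snippet, ready to be dropped into a page alongside a Leaflet map object named `map`.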
So now I have a JavaScript snippet; how do I add it to a page? Using the Sphinx Makefile, I updated the html target to run the Python script and save the output to an area where I embed it using a rST include.
That’s pretty much it. So now whenever the live deployment page is updated, a simple make clean && make html will keep things up to date. Reproducible workflow!
UPDATE 26 January 2012: the benchmarks on the improvements below were done against my home dev server (2.8 GHz, 1GB RAM). Benchmarking recently on a modern box yielded 3.6 seconds with maxrecords=10000 (!).
pycsw does a pretty good job of implementing OGC CSW. All CITE tests pass, configuration is painless, and performance is great. To date, testing has been done on repositories of < 5000 records.
Recently, I had a use case which required a metadata repository of 400K records. After loading the records, I found that doing GetRecords searches against 400K records brought things to a halt (Houston, we have a problem). So off I went on a performance improvement adventure.
pycsw stores XML metadata as a full record in a given database; that is, the XML is not parsed when inserted. Queries are then done using XPath queries via lxml, called as embedded SQL functions (for SQLite, these are realized using connection.create_function(); for PostgreSQL, we declare the same functions via plpythonu). SQLAlchemy is used as the DB abstraction layer.
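As a minimal illustration of the SQLite side (stdlib ElementTree stands in for lxml here, and the function body is simplified relative to pycsw’s actual implementation):

```python
import sqlite3
import xml.etree.ElementTree as ET  # pycsw itself uses lxml

def query_xpath(xml_text, path):
    """Evaluate a (simplified) path against a stored XML record."""
    elem = ET.fromstring(xml_text).find(path)
    return elem.text if elem is not None else None

conn = sqlite3.connect(":memory:")
# register the Python function so it is callable from SQL
conn.create_function("query_xpath", 2, query_xpath)
conn.execute("CREATE TABLE records (xml TEXT)")
conn.execute("INSERT INTO records VALUES (?)",
             ("<record><title>Lorem ipsum</title></record>",))
# the XML parse + path evaluation runs once per row, for every row scanned
row = conn.execute(
    "SELECT xml FROM records "
    "WHERE query_xpath(xml, 'title') LIKE '%Lor%'").fetchone()
```

Because the function is re-invoked (parser and all) for each candidate row, the cost grows linearly with repository size, which is exactly what hurt at 400K records.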
Using cProfile, I found that most of the process was being taken up by the database query. I started thinking that the Python functions being called from the database got expensive as volume scaled (init’ing an XML parser to evaluate and match on each and every row).
At this point, I figured the first step would be to rework the database with an agnostic metadata model, into which ISO, DC, FGDC, and DIF could fit, where elements can slot into the core (generic) model. Each profile then maps its queryables to a database column (instead of an XPath) in the codebase.
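Sketched as data, that mapping might look like the following (column and queryable names are illustrative, not pycsw’s actual internals):

```python
# each profile's queryables point at a column of one generic model,
# so a search becomes plain SQL instead of per-row XPath evaluation
QUERYABLE_MAP = {
    'dc:title': 'title',                        # Dublin Core
    'apiso:Title': 'title',                     # ISO AP
    'idinfo/citation/citeinfo/title': 'title',  # FGDC
}

def to_sql(queryable, literal):
    """Build a parameterized query against the mapped column."""
    column = QUERYABLE_MAP[queryable]
    return "SELECT * FROM records WHERE %s LIKE ?" % column, (literal,)

sql, params = to_sql('dc:title', '%Lor%')
```

The XML is still stored whole for presentation, but filtering now happens on indexed columns the database can optimize.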
At this point, I loaded 16000 Dublin Core documents as a first test. Results:
– GetCapabilities and GetDomain were instant, and I mean instant (these use the underlying database as well)
– GetRecords: I tried with and without filters. Performance is improved (5 seconds to return 15700 records matching a query [title = ‘%Lor%’], presenting 5 records)
This is a big improvement, but still I thought this would have been faster. I profiled the code again. The cost of the SQL fetch was reduced.
I then ran tests without using sqlalchemy in the codebase (i.e. SQL scripting as opposed to the SQLAlchemy way). I used the Python sqlite3 module, and that’s it. Queries got faster.
Still, this was only 16000 records. As well, I started thinking/worrying about taking away sqlalchemy; it does give us great abstraction over different underlying databases, and helps us greatly with transactional operations (insert/update/delete).
Then I started thinking more about bottlenecks and the fetch of data. How can we have fast queries and keep sqlalchemy for ease of interacting with the underlying repo?
Looking deeper, when pycsw processes a GetRecords request, it does exactly this (effectively ‘select * from records;’). So say the DB has 100K records: sqlalchemy gets ALL 100K records back. When I bring them back from server/repository.py to server/server.py, that’s an sqlalchemy object with 100K members we’re working with. Then, in that code, I page through the results using maxrecords and startposition as requested by the client / set by the server processing.
The other issue here is that OGC CSW servers are to report on the total number of records matched, provide the total number returned (per maxrecords or the server default), and present the returned records per the elementsetname (full/brief/summary). So applying a paging approach without getting the number of records matched was not an option.
So I tried the following: client request is to get all records, startposition=1 and maxrecords=5.
– one query which ONLY gets the COUNT of records which satisfy the query (i.e. ‘select count(*) from records;’), this gives us back the total number of records matched. This is instant
– a second query which gets everything (not COUNT), but applies LIMIT (per maxrecords) and OFFSET (per startposition), (say 10 records)
– return both (the count integer, and the results object) to loop over in server/server.py:getrecords()
So the slicing is now done in the SQL which is more powerful. So on 100K records, this approach only pushes back the results per LIMIT and OFFSET (10 records).
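A self-contained sketch of the two-query approach (plain sqlite3 here for brevity; pycsw does this through SQLAlchemy):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER, title TEXT)")
conn.executemany("INSERT INTO records VALUES (?, ?)",
                 [(i, "Lorem %d" % i) for i in range(100000)])

startposition, maxrecords = 1, 5

# query 1: COUNT only -- gives the total matched, effectively instant
matched = conn.execute(
    "SELECT COUNT(*) FROM records WHERE title LIKE '%Lor%'").fetchone()[0]

# query 2: the actual page of results via LIMIT/OFFSET
results = conn.execute(
    "SELECT id, title FROM records WHERE title LIKE '%Lor%' "
    "LIMIT ? OFFSET ?", (maxrecords, startposition - 1)).fetchall()
```

Only `maxrecords` rows ever cross the database boundary, while `matched` still satisfies the CSW requirement to report the total number of records matched.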
Results come back in less than 1 second. Of course, as you increase maxrecords, this is more work for the server to return the records. But still good performance; even when maxrecords=5000, the response is 3 seconds.
So the moral of the story is that smart paging saves us here.
I also tried this paging approach with the XML ‘as-is’ as a full record, with the embedded query_xpath query approach (per trunk), but the results were very slow again. So the embedded xpath queries were hurting us there too.
At this point, the way forward was clearer:
– keep using sqlalchemy for flexibility; yes, if we remove sqlalchemy it will improve performance, but I think the flexibility it gives us, as well as we still get good performance, makes sense for us to keep it at this point
– update data model to deconstruct the XML and put into columns
– use paging techniques to query and present results
Other options:
– XML databases: looking for a non-Java solution, I found Berkeley DB XML to be interesting. I haven’t done enough pycsw integration yet to assess the pros/cons. Supporting SQLite and PostgreSQL makes pycsw play nice for integration
– Search servers: like Sphinx, the work here would be indexing the metadata model. Again, the flexibility of using an RDBMS and SQLAlchemy was still attractive
Perhaps the above approaches could be supported as additional db stores. Currently, pycsw code has some ties to what the underlying data model looks like. We could add a layer of abstraction between the DB model and the records object model.
I think I’ve exhausted the approaches here for now. These changes are committed to svn trunk. None of these changes will impact end user configuration, just a bit more code behind the scenes.