This blog is moving to my personal web space. I want to write about more than just Programmer's Notepad (as I do here), and I think my personal space is a better home for that. I will still, obviously, write about PN, and in fact I aim to write a bit more about it - time will tell.
I won't be importing all the old content because it will keep its home here on pnotepad.org, but I will be disabling comments and trackbacks for these old entries.
See you there: untidy.net/blog
A list of things I must remember to do at the moment:
- Merge my Scintilla changes with the trunk again so that I can update the build used in PN2. The changes are:
  * Some Pascal lexer changes.
  * A change to provide on-paste line-ending conversion.
- Merge some docking framework bug fixes.
- Finish the find-in-files code.
- Project settings work (lots of work here).
- Continue the PN DocBook documentation effort.
- Finish my MSc thesis.
- Move my blog to WordPress or something with better comment-spam prevention. The blacklist is just not enough, and I don't really want to upgrade to MT 3. I think at the same time I'll also move my blog onto untidy.net, my personal web space. Then I'll feel better about using it to write about general "stuff" as well as PN.
Jim Hugunin, the inventor of IronPython (Python for .NET), has started a blog, and in his first post he gives examples of using IronPython as an interactive scripting environment for .NET. I'd never really thought before about just how useful the interactive aspect of the language could be, but it's excellent. Combined with tools like SnippetCompiler from Jeff Key, it could be incredibly useful in cutting down the time taken to experiment with code.
I downloaded IronPython recently and used it to briefly prototype a piece of code for experimentation purposes, and it's definitely cool. I love the simplicity of Python code, and being able to import all the really cool .NET framework classes and use those with Python code is sweet. Now what we need is for the base Python libraries to be available alongside the .NET framework in one place.
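As a rough illustration (my own sketch, not from Hugunin's post), mixing a .NET framework class with plain Python code looks something like this. The try/except fallback is only there so the snippet also runs under ordinary CPython, where the `System` namespace doesn't exist:

```python
# Sketch: importing a .NET framework class from IronPython and using it
# alongside ordinary Python code. Under plain CPython (no System
# namespace) we fall back to the stdlib equivalent.
try:
    from System import DateTime          # .NET class, available under IronPython
    stamp = DateTime.Now.ToString()
except ImportError:
    from datetime import datetime        # plain Python fallback
    stamp = datetime.now().isoformat()

message = "Generated at " + stamp        # ordinary Python string handling
print(message)
```

The appeal is exactly this mix: the whole .NET class library is one `import` away, but everything around it is just Python.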
One thing that seems to be missing at the moment is a pycl tool, an actual compiler. I could only find the interactive run-time with the distribution, but hey - it's only an alpha!
From the "what's new and good in PN2 0.5.5" department:
In PN2 0.5.5 you can now get the project and project group paths when building up tool parameters. You can't find these constants in the interface yet, as they use a new scheme that has only just been introduced. The next version will make this more obvious but for now, here are the constants:
Remember all paths include a trailing path separator. If no project or project group is open and valid then these constants evaluate to nothing.
In the future I will be moving more towards this expressive style of constant and away from the old one; %x constants are difficult to remember.
At work we have a set of inter-dependent components (many of which I have written) that are used in multiple products. It's important to keep all shipped versions of the components so that when debugging we can use the correct component parts. There are several problems to overcome when working with components like this:
- the components do not change all at once, but because they have dependencies on each other they need to be built as a set, or we have complicated version mapping issues.
- the version numbers: we could version all the components as a set, but this hides which components have actually been updated in each build. We chose not to tie the version numbers together, but this means we need to be able to identify the correct set of versioned components.
- keeping the sets identifiable for debugging at a later stage.
- changing references to assemblies in Visual Studio .NET is a pain, especially if the project is in source control.
I needed to develop a strategy for use and versioning because managing this lot manually was becoming a time-waster. This post details what I came up with, and in response I hope to hear other people's solutions to the same problem.
We keep a shared components directory on a server, called \\Server\SharedComponents (names changed to protect the innocent). This directory holds all shipped versions of the components, and also the current development ones. Inside it there is a directory called Head, which holds the most recent builds of components that work as a set. These components are all built at the same time using a NAnt script. In fact, that's a bit of a lie because some Managed C++ bits won't build with NAnt at the moment and so are done manually, but in an ideal world...
The projects that we work on that use these components use the head build while in development. When the product is preparing for a release, we use the following steps:
- Copy the known-good set of components into a dated directory, e.g. 20040803
- Branch the product in source control and use a python script to check the projects out, change all the project references to point at the now stable components distribution, and check the projects back in again (with a suitable change log message, of course!).
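The reference-fixing step might look something like this hypothetical sketch. It is not the real script - it skips the source-control checkout/checkin and simply rewrites the component paths inside the project files, and all paths and names here are made up:

```python
# Hypothetical sketch: repoint project references from the Head
# components build to a dated, frozen build by rewriting paths
# inside the project files.
import glob

HEAD = r"\\Server\SharedComponents\Head"        # development components
RELEASE = r"\\Server\SharedComponents\20040803"  # frozen, dated set

def repoint(project_file):
    """Rewrite one project file; return True if anything changed."""
    with open(project_file) as f:
        text = f.read()
    updated = text.replace(HEAD, RELEASE)
    if updated != text:
        with open(project_file, "w") as f:
            f.write(updated)
        return True
    return False

# Example driver (the source tree path is illustrative):
# for proj in glob.glob(r"C:\src\Product\**\*.csproj", recursive=True):
#     repoint(proj)
```

The real version would wrap each `repoint` call in a check-out and check-in against source control, with a suitable change log message.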
We now have a fixed set of components for one release, and with very little effort all of the references are changed to use this build. In the future that branch can be checked out and all will be as it was on the day of release!
This solution is not perfect: identifying a set of components can be slightly hard, and there are still issues with keeping change logs for component sets up-to-date, but it's a step in the right direction (for us, at least!).
So tell me, how do you solve this problem?
p.s. I'll post the python script here at some point for those interested.
First the good news: the latest version of PN2 has been released to sourceforge and the pnotepad.org website. It has lots of good new features, and loads of bug fixes for annoying problems with previous releases.
Then the bad: there is a bug in the global tools code which means that closing the options dialog with global tools configured crashes PN 2. You won't lose any settings, but you may lose unsaved data. There will be another release of PN 2 in a couple of days to fix this problem.
From the "what's new in PN 2 0.5.5" department comes the "middle-click on a tab to close it" feature. This is now my favorite feature (for at least the next ten minutes). Go on, try it, you'll soon be addicted. The only problem is that now any application that doesn't support this feature will irritate you. A lot.
Firefox does support it - wahey!
So, try it; love it; petition others to implement it and make the world a better place.
This is the first in a series of "what's new in PN 2 0.5.5"-style articles. Maybe.
I am writing my MSc thesis at the moment, and am trying to do it in DocBook - the same way I'm trying to write the PN documentation (slowly!). Transforming DocBook into PDF takes an impressive toolchain: XML & XSL -> XSL:FO -> PDF, which means an XML editor, the DocBook stylesheets, xsltproc and FOP (Java + Xalan + Saxon + Apache FOP). Just a couple of tools, you might think - but they took me ages to collect and configure into a working setup.
I'm trying to write this explanation to keep a note for myself on how I did things. The various sections are in no particular order, so my apologies if it seems to be a ramble.
Getting the tools
You can get xsltproc for windows from the website referenced in the References section. You need to retrieve libxml2 (which contains libxml2.dll, xmlcatalog.exe and xmllint.exe), libxslt (libxslt.dll, libexslt.dll and xsltproc.exe), iconv and zlib. Download all the zips, and extract the .dll and .exe files. These are scattered around the bin and lib directories inside the zip files.
To transform the DocBook XML into XSL:FO XML (for later conversion to PDF), you need to get the DocBook XSL stylesheets. These need to be extracted and stored in a sensible location on your hard-disk.
FOP is the tool from the Apache XML project that converts from an XSL:FO formatted XML file (a file full of layout instructions) into other formats such as PDF. FOP is a java tool so you'll need a Java runtime. I downloaded the binary version, and also needed to download JAI in order to get picture insertion working. You need to install JAI and then the FOP tool will pick it up automatically. The FOP site also references something called JIMI but I couldn't get this working.
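Once the tools are collected, the chain itself boils down to two commands: xsltproc to produce the XSL:FO file, and FOP to produce the PDF. Here is a small Python sketch that builds those command lines; the stylesheet location is an assumption, so point it at wherever you extracted the DocBook XSL stylesheets:

```python
# Sketch of the DocBook -> XSL:FO -> PDF pipeline as two command lines.
# DOCBOOK_FO_XSL is an assumed path; adjust it to your own copy of the
# DocBook XSL stylesheets.
import subprocess

DOCBOOK_FO_XSL = r"c:\xml\docbook-xsl\fo\docbook.xsl"

def build_commands(source_xml, basename):
    fo_file = basename + ".fo"
    pdf_file = basename + ".pdf"
    return [
        # Step 1: DocBook XML -> XSL:FO (xsltproc takes the stylesheet
        # first, then the input document)
        ["xsltproc", "--output", fo_file, DOCBOOK_FO_XSL, source_xml],
        # Step 2: XSL:FO -> PDF via Apache FOP
        ["fop", "-fo", fo_file, "-pdf", pdf_file],
    ]

def run_pipeline(source_xml, basename):
    for cmd in build_commands(source_xml, basename):
        subprocess.check_call(cmd)

# run_pipeline("thesis.xml", "thesis")  # would produce thesis.fo then thesis.pdf
```

You can of course just type the two commands by hand; wrapping them in a script only pays off once you are rebuilding the document constantly.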
If you're not really into the world of XML (I feel like I know a fair bit about it, and am barely scratching the surface compared to many others) then you may not know a lot about DTDs, schemas and catalogs. Simply put, DTDs and schemas are often used by the tools listed above to validate XML content - they define a contract for the content of XML files. If you run these tools without a catalog, they will attempt to retrieve these contract files from the internet, which takes a long time and really slows down the conversion process.
It took me ages to work out how to get catalogs working properly with xsltproc; there was no Windows documentation, so I pieced it together from e-mails and snippets found using Google.
Creating the Catalog
This shows how to create a simple catalog that points to a local copy of the docbook DTD. First you need to download the DTDs, which there are links to in the references section below. I suggest placing them in a directory structure like:
xml\docbook\4.3\dtd <-- DTDs for docbook 4.3 in here
The DTDs are referenced in the XML files you are working with by a public identifier, for example:

-//OASIS//DTD DocBook XML V4.3//EN

The catalog mechanism works by mapping from this identifier to a file on your disk.
Here is a simple catalog file containing a mapping for this DTD:
<?xml version="1.0"?>
<!DOCTYPE catalog PUBLIC "-//OASIS//DTD Entity Resolution XML Catalog V1.0//EN"
  "http://www.oasis-open.org/committees/entity/release/1.0/catalog.dtd">
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <public publicId="-//OASIS//DTD DocBook XML V4.3//EN"
          uri="file:///c:/xml/docbook/4.3/dtd/docbookx.dtd"/>
</catalog>
Note that you can also map previous versions of the requested DTD onto the newer version by mapping the old IDs to the new files.
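For example, a hypothetical extra entry (added inside the `<catalog>` element above) that resolves requests for the 4.2 DTD to the local 4.3 copy might look like:

```xml
<!-- Hypothetical entry: resolve 4.2 DTD requests to the local 4.3 files -->
<public publicId="-//OASIS//DTD DocBook XML V4.2//EN"
        uri="file:///c:/xml/docbook/4.3/dtd/docbookx.dtd"/>
```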
Pointing at the catalog
Under Linux, xsltproc looks for a catalog in the default location of /etc/xml/catalog (or something similar). No alternative default is offered on Windows. Therefore, to point xsltproc at your catalog you must set the XML_CATALOG_FILES environment variable. This allows a space-separated list of filenames to be used.
From the command prompt (the catalog path here is just an example - point it at wherever you saved your catalog file):

set XML_CATALOG_FILES=C:\xml\catalog.xml

You can also set this through the System Properties control panel applet. Once this is set, xsltproc will load your catalog file and use it to resolve the DTDs.
If you think this isn't working properly, you can view debug information relating to the use of the catalog by defining an environment variable like this:

set XML_DEBUG_CATALOG=1

You will now see lots more information about resolution when running xsltproc.
This post gives a bit of information about how to get the environment set up. I'll hopefully have time to write a bit about using all these tools in another post.
1. Windows ports of xsltproc and required libraries: http://www.zlatkovic.com/libxml.en.html
2. DocBook XML DTDs: http://www.docbook.org/xml/index.html
3. DocBook XSL stylesheets: http://docbook.sourceforge.net/projects/xsl/
4. FOP: http://xml.apache.org/fop/
5. JAI: http://java.sun.com/products/java-media/jai/
It's clearly the day for non-software toys.
The SoundBridge from Roku is a network music player that plays from iTunes - it doesn't need any custom software of its own. It can work over wired or wireless networks and has a nice display:
The Roku can also be fed by the open source SlimServer music server. There are two models, one with a six inch display (the M1000, at around $250) and one with a massive twelve inch display that can display four lines of text (the M2000, around $500). Definitely cool toys.