Fuse for the Idle Fellow – Error opening zip file when starting a container

When is an unresolvable artifact not an unresolvable artifact? Fuse’s “Error while downloading artifacts” Red Triangle of Death can have some weird and wonderful underlying causes.

Maybe it’s just me, but of the manifold reasons for the still-birth of my JBoss Fuse containers, “error while downloading artifacts” has got to be by far the most common:-

Provision Exception:
io.fabric8.agent.utils.MultiException: Error while downloading artifacts
  at io.fabric8.agent.utils.AgentUtils$ArtifactDownloader.
      await(AgentUtils.java:314)
  at io.fabric8.agent.DeploymentBuilder.download(
      DeploymentBuilder.java:179)
  …

Nothing remotely helpful thus far – this particular stack trace is more a race to the bottom than Donald Trump’s Presidential campaign:-

…
java.io.IOException: Error downloading mvn:uk.co.national-lottery/
      winning-ticket-generator/1.0.0
  at io.fabric8.agent.download.AbstractDownloadTask.initIOException(
    AbstractDownloadTask.java:108)
  at io.fabric8.agent.download.AbstractDownloadTask.run(
    AbstractDownloadTask.java:88)
  …
Caused by: java.io.IOException: URL [mvn:uk.co.national-lottery/
      winning-ticket-generator/1.0.0] could not be resolved.
  at io.fabric8.agent.download.MavenDownloadTask.download(
    MavenDownloadTask.java:128)
  at io.fabric8.agent.download.AbstractDownloadTask.run( 
    AbstractDownloadTask.java:77)
  …

Scroll down to the underlying cause and more often than not – especially when there are only one or two instances of this message – the reason is a genuinely unresolvable artifact. Perhaps a mis-keyed identifier, a version that hasn’t been released yet or, as in this case, hopeless optimism on my part.
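
If in doubt, it’s worth proving the artifact really is unresolvable before blaming Fuse itself. A plain Maven resolution from a shell on the same box does the trick – the repository URL below is an assumption, so substitute whichever repos your containers are actually configured to use:-

[root@localhost ~]# mvn dependency:get \
    -Dartifact=uk.co.national-lottery:winning-ticket-generator:1.0.0 \
    -DremoteRepositories=https://repo.example.com/content/groups/public

If that fails too, the problem lies with the artifact (or the repo), not the container.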

But what if there are dozens of these exceptions and hundreds upon hundreds of lines of stack trace as in the following heavily abbreviated example?

Provision Exception:
io.fabric8.agent.utils.MultiException: Error while downloading artifacts
  …
java.util.zip.ZipException: error in opening zip file
  …
java.io.IOException: Error downloading mvn:io.fabric8/
    fabric-cxf/1.0.0.redhat-424
  …
java.io.IOException: Error downloading mvn:io.fabric8/
    fabric-zookeeper/1.0.0.redhat-424
  …
Caused by: java.io.IOException: URL [mvn:io.fabric8/
    fabric-zookeeper/1.0.0.redhat-424] could not be resolved.
  …
java.io.IOException: Error downloading mvn:org.apache.camel/
    camel-cxf/2.12.0.redhat-611431
  …
Caused by: java.io.IOException: URL [mvn:org.apache.activemq/
    activemq-osgi/5.9.0.redhat-611431] could not be resolved.
  at io.fabric8.agent.download.MavenDownloadTask.download(
    MavenDownloadTask.java:123)
  at io.fabric8.agent.download.AbstractDownloadTask.run( 
    AbstractDownloadTask.java:77)
  … 5 more

There are certainly a couple of meaningful reasons for this sort of thing happening. If the artifacts are related and held in one particular Maven repository, that repo could be offline or missing from the list of repos Fuse is searching; if they span several external repos, the culprit could be a broken connection to the Internet.
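
Both are easy enough to rule out. In a Fabric, the list of repositories the provisioning agent searches lives in the io.fabric8.agent PID of the container’s profiles, so a quick grep from the Fuse console shows what it is actually looking at – a sketch only, as property names and defaults vary a little between Fuse versions:-

JBossFuse:karaf@root> fabric:profile-display default | grep repositories
org.ops4j.pax.url.mvn.repositories = <your repo list here>

If the repo holding your artifacts isn’t in that list, no amount of retrying will help.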

But the above example is caused by neither of these, and it stands out for a couple of reasons. “error in opening zip file” messages suggest a failure at a later stage of the resolution process – the artifact was fetched but arrived truncated or corrupt – and the failure to download bundles such as fabric-zookeeper or fabric-cxf, which are part of the Fuse distribution itself, suggests the dweebs running the corporate network aren’t to blame either.

Something less obvious is going wrong here.

Patch work

Containers in a Fuse Fabric have a lot of autonomy, a devolved model which has both strengths and weaknesses. Whilst it lends itself well to scalability and robustness, it can, to say the least, be a little opaque when you are trying to track problems down, and it can certainly throw up some unexpected concurrency issues too.

The lack of Fuse distros from Red Hat that incorporate the latest patches also means that many of us end up scripting the installation of required patches when building out new environments. And it’s here that I see the above artifact auto-wrecks: when patches are being rolled out while containers are trying to provision themselves.
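
The obvious mitigation is sequencing: don’t let the patch script loose while containers are still provisioning. A crude sketch of the kind of guard I now wrap around mine – the Fuse home, status strings and polling interval are all assumptions to adapt to your own environment:-

#!/bin/sh
# a rough sketch: poll the Fabric until no container reports an in-flight
# provision status, and only then start rolling patches out
FUSE_HOME=/opt/fuse    # assumption: your install location
while "$FUSE_HOME/bin/client" "fabric:container-list" \
        | grep -qE 'downloading|provisioning|installing'; do
    sleep 10
done
# ...safe(r) to apply patches from here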

A quick look in the data directory for the failed container suggests that its downloads did indeed go a little awry:-

[root@localhost data]# find . -type f -empty
./fabric-agent/download-6648707680626457216.tmp
./fabric-agent/download-1588863625139046510.tmp
./fabric-agent/download-370426190498069379.tmp
./fabric-agent/download-4804085320291013834.tmp
./fabric-agent/download-1627848239434867514.tmp
./cache/bundle58/data/system.properties
./cache/bundle58/data/libs.properties
./cache/bundle58/data/extension.properties
./cache/bundle58/data/config.properties
./cache/bundle58/data/endorsed.properties
./cache/cache.lock

A quick dash to the Google-cave throws up a couple of JIRAs in this area but nothing that quite resolves the problem. Fortunately, we idle fellows will happily settle for resolving the symptom instead, and this one is particularly idle-fellow friendly. Downloaded bundles are stored in the container’s data directory under cache and maven/agent. Shut down the container and clear these folders down:-

[root@localhost data]# rm -rf cache/*
[root@localhost data]# rm -rf maven/agent/*

Restart it, and all is well.
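
For the terminally idle, the whole remedy scripts up nicely too. A minimal sketch for a child container – the container name, install location and instance layout are assumptions, so adjust to taste:-

#!/bin/sh
# stop the ailing child container, clear its bundle caches, start it again
FUSE_HOME=/opt/fuse          # assumption: your install location
CONTAINER=child1             # assumption: the failed container's name
DATA="$FUSE_HOME/instances/$CONTAINER/data"

"$FUSE_HOME/bin/client" "fabric:container-stop $CONTAINER"
rm -rf "$DATA/cache/"* "$DATA/maven/agent/"*
"$FUSE_HOME/bin/client" "fabric:container-start $CONTAINER"

On the next start the agent downloads everything afresh and, with no patch roll-out fighting it this time, the container provisions cleanly.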
