Unimore PLE/Docs/Tools/CoDeploy



CoDeploy needs an HTTP server, such as Apache2, running on the local host that contains the software to be deployed. If you are behind a firewall, port 80 of the local host must therefore be "open", so that remote machines can access the content by means of an HTTP connection, as if they were downloading a file from a Web site.

You have to set up the HTTP server so that, when you provide a URL pointing to a directory on your local host, that directory can be found by the remote nodes. On Ubuntu, modify the file /etc/apache2/sites-available/default so that the "DocumentRoot" parameter, which by default is /var/www/, points e.g. to your PlanetLab directory.
So, if you have: DocumentRoot /var/www/
then replace it with: DocumentRoot /home/yourUserName/choosenFolder/
This should work just fine if your Linux box is used by a single user. After that, you should be able to view a sample index.html file, saved into "choosenFolder", with a Web browser by connecting to the IP of your local host. Make sure that you can also view it when connecting from outside your LAN: if you cannot, the firewall is blocking port 80.
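For reference, the relevant part of /etc/apache2/sites-available/default might look like the sketch below. The username and folder are placeholders, and the access directives assume Apache 2.2 (the version shipped with the Ubuntu releases that use a plain "default" site file); newer Apache versions use "Require all granted" instead.

```apache
DocumentRoot /home/yourUserName/choosenFolder/
<Directory /home/yourUserName/choosenFolder/>
    # "Indexes" lets Apache list the directory contents,
    # which is what you will check with a browser and what
    # remote nodes will fetch.
    Options Indexes FollowSymLinks
    Order allow,deny
    Allow from all
</Directory>
```

Remember to restart Apache (e.g. sudo /etc/init.d/apache2 restart) after changing the configuration.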

The syntax of the CoDeploy command is something like this: codeploy -a localDirectory http://localHostIP_orName/localDirectory/ remoteDirectory.
With -a you specify that you want to send compressed data, which is useful when deploying large files for the first time. localDirectory is the directory containing the files you want to deploy, and it MUST be inside the "choosenFolder" that you configured as the DocumentRoot. The URL is where remote nodes will find the files, and remoteDirectory is the remote directory where the files will be copied. Before issuing the command, you can check that the files are in the correct folder by opening the page http://localHostIP_orName/localDirectory/ with a Web browser: it should display the list of files that you want to transfer.
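As a concrete sketch (every name below is a hypothetical placeholder for your own host, folder, and slice), the invocation can be assembled from its three parts; echoing the command first lets you double-check that the URL argument really points inside your DocumentRoot before running it:

```shell
# All names are hypothetical placeholders; adjust to your setup.
LOCAL_DIR="myExperiment"                   # must sit inside "choosenFolder"
BASE_URL="http://myHostIPorName"           # your local host, port 80 reachable
REMOTE_DIR="/home/mySlice/myExperiment"    # destination directory on the nodes

# Build the command line and inspect it before running it for real.
CMD="codeploy -a $LOCAL_DIR $BASE_URL/$LOCAL_DIR/ $REMOTE_DIR"
echo "$CMD"
```

Opening $BASE_URL/$LOCAL_DIR/ in a browser before running the printed command confirms that the file listing is reachable.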

CoDeploy then generates a script that is copied to the remote nodes and executed there; the script downloads the files either from the hosts of a CDN (CoDeeN) or from the local host. It seems to be a rather slow method compared to MultiCopy, but this may depend on the volume of data to be transferred.

work in progress