Monday, 5 August 2013

LibreOffice remote for Android

I'm a self-declared champion of free software and I use it whenever I can. My office suite of choice is LibreOffice and it does everything I need. Mostly, this is just presentations, so I was excited to discover that newer versions (4.0.3+, I believe) support the use of your Android phone as a remote control for your Impress presentations. A rough guide is on The Document Foundation's site but here's what I did to get it working on two Ubuntu 12.04 laptops.

Bluetooth

First, you need to establish a Bluetooth connection between your laptop and phone. I had to check whether my laptop(s) actually had Bluetooth. It turns out they don't, but I have a basic Bluetooth dongle (seriously, who chose that word?) that I used on my old desktop for years. When I plugged it in, Linux detected it straight away and the Bluetooth icon appeared in my system tray. From there, I activated Bluetooth on my phone and made it discoverable, before using the computer to establish a connection. The computer provided a PIN that, when prompted, I entered into the phone.
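If you're not sure whether your machine has an adapter, a couple of quick terminal checks can tell you. This is a sketch; the exact tools depend on your distribution, but both were standard on Ubuntu 12.04:

```shell
# Look for a USB Bluetooth dongle among the connected USB devices.
lsusb | grep -i bluetooth

# List Bluetooth adapters known to the bluez stack (built-in or USB).
hciconfig
```

If both come back empty, a cheap USB dongle is the easiest fix.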

Now, the process wasn't seamless. I tried several times and needed to "forget" the computer at each new try. But eventually it got through. The only nagging issue was that both computers turned up as ubuntu-0, which totally confused the phone for a while.

LibreOffice

Next, you need to enable the remote control in Impress. Dead easy. Open Impress and select Options at the bottom of the Tools menu. Then expand the Impress section and select General. There you should see the option "Enable remote control", which you should check. Then you need to fully restart LibreOffice by closing everything; when you start LibreOffice again, you should see the splash screen. Now you can open your presentation in Impress.

Android

On your phone, you obviously need to download the remote control app. With that done and the Bluetooth connection established, open the app and you should find the computer listed on the screen. If you select it, it'll either say "no presentation running" and give you the option to start one, or cut straight to the presentation that's already open.

There are a few things I didn't find obvious, so I'll share them here. First, the volume up and down buttons correspond to the right and left arrow keys on the keyboard. If you have animations in your presentation, this is the only way I know to trigger them. Second, you can press the clock to choose between the time, a stopwatch and a countdown timer.

The app is quite nifty but I'm not sure how useful it is on a small screen like my Nexus One. I'll probably end up sticking to presenter mode but it might be a totally different story on a tablet.

Friday, 26 July 2013

Installing Python modules without root privileges

It seems I'm doomed to work on centrally-administered Linux systems without root privileges. I've already had to write about similar problems before. I'm on a much more up-to-date distribution (openSUSE) these days, so LaTeX and out-of-date software are generally no longer problems, but I often find myself wanting to install one or another nifty Python package for my work. Luckily, it turns out someone out there is thinking of people like me. The standard distribution mechanism for Python packages includes a method to install a package in your user space. The official description is here but here are the basics.

All the packages in the Python Package Index are installed with a setup script, conventionally called setup.py. Usually, you'd install such a package by downloading it, extracting it, changing to the new directory and typing

sudo python setup.py install

These now include the option to install to your user space by instead entering

python setup.py install --user

By default, this creates a root-like tree under ~/.local/. With that done, you're basically good to go! The user directory is already on Python's search path, so you can import the packages in Python, IPython or scripts. It's always handy to test by entering

python -c "import package_name"

at the command line (substituting the package's import name, which isn't always the same as its name on PyPI) and watching for errors.
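If you want to see exactly where the user-space packages land, the standard library's site module will tell you. This is just a quick check, not part of the install procedure:

```python
# Print the per-user site-packages directory that --user installs into.
# Python adds this directory to sys.path automatically at startup.
import site

user_site = site.getusersitepackages()
print(user_site)  # e.g. ~/.local/lib/pythonX.Y/site-packages
print(site.ENABLE_USER_SITE)  # True when the user directory is being honoured
```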

All that said, there's a known conflict with the default settings in openSUSE and Red Hat. (I'm not sure if it applies to other RPM-based distributions.) This can be fixed by adding an empty --prefix= argument to our command. So the full command becomes

python setup.py install --user --prefix=

That's it! As a closing note, this method also carries the handy benefit of following you around as you log in to different computers, presuming they share your home directory and have the same architecture and Python version. This is particularly useful when offloading work onto other computers in a network, or when using a cluster. Just be sure to set up the appropriate environment variables if they aren't automatically loaded.
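One environment variable worth knowing about: packages that install command-line scripts put them in ~/.local/bin, which isn't always on your PATH. A minimal sketch of what to add to your ~/.bashrc (the path follows from the ~/.local tree described above):

```shell
# Make console scripts from user-space installs visible to the shell.
export PATH="$HOME/.local/bin:$PATH"
```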

Monday, 15 July 2013

Transfer files to and from Android phone wirelessly with Samba

I regard using a cable to transfer files to and from your phone as very noughties. Wireless is now definitely the way. Most of the time, you probably already do this by virtue of tools like Winamp's music syncing or Dropbox's automatic photo upload, but sometimes a guy just wants to, you know, copy some files. Fortunately, for Android, there's a great means to this end: Samba Filesharing for Android. In short, it turns your phone into a Windows-compatible file server. But since the underlying protocol is open and implemented on many other operating systems, it's easily accessible from Linux too.

All you need to do is download the app, set up the basic requirements (e.g. username and password) and start the service. Your phone should then appear as a network drive. In Windows, this involves selecting ANDROID (or whatever you named the device) from the Network tree in a file manager. On selecting the phone, you'll be prompted for the credentials you set up. You're then free to transfer files to your heart's content!

In Ubuntu, you'll find the network drive in the file manager under Browse network > Workgroup. On opening the device, you'll be prompted for the login credentials. Then you should be good to go. If you get an error message like "Failed to retrieve share list from server" (as I did), you probably need to install Samba. You can do so through your favourite package manager; the package name is (surprisingly enough) samba, so sudo apt-get install samba will do.

In theory, it's also possible to mount the Samba device on the Linux filesystem but I haven't gotten that to work. There's a utility called smbfs, which is designed to do just this, and you can find some information here but I only ever get a "permission denied" error. I'd be interested to hear from anyone who has used this system successfully.
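For the record, Ubuntu's smbfs package had by then been superseded by cifs-utils, and the mount is done with mount -t cifs. Here's a hedged sketch; the IP address, share name and mount point are placeholders for whatever your phone and the app report:

```shell
# Mount the phone's share on the local filesystem (requires cifs-utils).
# 192.168.1.50 and "sdcard" are hypothetical; substitute your own values.
sudo mkdir -p /mnt/phone
sudo mount -t cifs //192.168.1.50/sdcard /mnt/phone \
    -o username=myuser,uid=$(id -u),gid=$(id -g)
```

The uid and gid options map the mounted files to your own user, which is one common cause of (and fix for) "permission denied" errors with this kind of mount.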

Saturday, 29 June 2013

Goodbye Google Reader; Hello Feedly

Exit Google Reader

It's no news that Google Reader is being shut down. Like many Reader-users (Readers? Rusers? Hmmm...), I've left it close to the last possible minute to find a replacement. But I've done so and here's a summary of my choice.

In defence of those who've put off the move, part of the problem was, initially, the lack of really Reader-like RSS readers. But since the announcement, many services have added features and functionality to better mimic our departing favourite. Which is great, because we like Google Reader, and all we really want is something just like it.

Enter Feedly

I've settled on Feedly. To be honest, I haven't done much experimentation with other services, but Feedly has been updating to closely fit my Reader needs. Firstly, I wanted access to my RSS feeds on a webpage. (Feedly only fulfilled this quite recently.) This rules out readers that install local clients or extensions. Secondly, I wanted an Android app. This ruled out iOS-exclusives like Reeder. And finally, I just wanted something that looked like Reader, with one article per line, so no Pulse. I don't want images all over my feeds. Not until I open an article, anyway.

Hence Feedly. It can be configured into an almost-perfect replica of Google Reader. In fact, the only UI difference I've found so far is that pressing n moves to the next article in the list and marks it as read, which Google Reader didn't do. But that's okay. I can get used to that.

The transition wasn't perfectly seamless, either, but it may have been my own doing. I was using Feedly on my phone to read from my Google Reader account. When I transferred the feeds on my PC, I suspect I opened a Feedly account and thus the two devices weren't, for a few days, actually reading the same feeds. I think I've fixed this but watch out for your own growing pains. Even if it is a bit rough, you don't have much choice, since Google Reader is cutting us off anyway.

RSS is still useful

Finally, while we're here, I'd like to point out to anyone asking (is anyone asking?) that I still find RSS very useful. While New Scientist's and Scientific American's current stories can tumble down a Twitter or Google+ feed without my having to worry that I've missed some critical information, there are many things for which I like to be sure I've seen every release. For my own enjoyment, this includes, say, webcomics. I like to read every xkcd, whether I'm coming into the office as usual or computer-less on the Baltic Sea for a week. More notable, however, are journal articles. Want to make sure you miss nothing of your favourite scientific journals (or the arXiv preprints)? Subscribe to the RSS feed! That way, an article only goes away when you move past its title in your feed.

Have you moved on to a new RSS reader? Feedly or some other? Or have you finally abandoned RSS entirely? Let me know.

Tuesday, 4 June 2013

Parallel iPython

For a few months now, I've been using IPython to do a heavy but embarrassingly parallel calculation. I finally decided to work out how to use IPython's parallel computing mechanisms to do the job several times faster. Here's a summary of my routine. Most of this can be found in the IPython documentation, but I'll mention a few extra points I noted.

Starting the IPython cluster

To do parallel calculations, IPython needs to run a number of engines, which it calls on from the interface to do the heavy lifting. These are started with

ipcluster start -n 4

where, in this example, 4 engines will be started. My quad-core processor supports hyper-threading, so the OS actually sees 8 logical cores. I usually run 6 engines.

This command must be left running alongside IPython. You can, for example, run it in a different terminal or send it to the background of the same terminal, either by appending & to the command or by pressing Ctrl-Z and then typing bg. I tend to run it in a separate terminal tab and send it to the background.

When the time comes to stop the engines, you can either bring the ipcluster job to the foreground and abort (Ctrl-C) or type

ipcluster stop

Initializing the clients in IPython

Now, in your instance of IPython, you need to import IPython's parallel client module.

from IPython.parallel import Client

Then, we can assign a client object that will have access to the engines that you started with the ipcluster command.

c = Client()

We aren't quite ready to start calculating. From the documentation,
The two primary models for interacting with engines are:
  • A Direct interface, where engines are addressed explicitly.
  • A LoadBalanced interface, where the Scheduler is trusted with assigning work to appropriate engines.
I use the LoadBalanced interface because it decides on the most efficient way to assign work to the engines. The interface object provides its own map function, which works like the built-in map function but invokes the engines in parallel. To create the interface, type

lbv = c.load_balanced_view()

We also want to put the view in blocking mode, so that calls like map wait for the engines to finish and return the results directly, rather than returning an asynchronous result object.

lbv.block = True

At this point, you could start calculating, if you have work that doesn't depend on having any data or any of your own functions. For example, try

lbv.map(lambda x:x**10, range(32))
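Since lbv.map mirrors the semantics of the builtin map, you can sanity-check the mapped function serially before any engines are involved. This is plain Python with no cluster required:

```python
# Prototype with the builtin map; lbv.map should return the same list,
# just computed across the engines instead of locally.
serial = list(map(lambda x: x**10, range(32)))
print(serial[:4])  # → [0, 1, 1024, 59049]
```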

In reality (or, at least, my reality), I need to make calculations that involve my own functions and data and there's a bit more to do to make all that work.

Preparing the clients

I think of the clients as new IPython instances that haven't issued any import commands or defined any variables or anything. So I need to make those imports and define those variables.

There are two ways to import packages. The first, which I use, boils down to telling the engines to issue the import command themselves. For example, to import NumPy,

c[:].execute('import numpy')

Alternatively, you can enter

with c[:].sync_imports():
    import numpy

I'm not aware of either method being preferred.

To define variables, we could use the execute function above, but that might get painful for complicated expressions like list comprehensions. Much better is to assign the variable directly in the engines' dictionary of global variables. For a variable my_var defined in the local IPython instance, enter

c[:]['my_var'] = my_var

Calling your own functions

My work originally used a function with a call signature something like

output = my_fun(var1, var2, var3, list_of_arrays1, list_of_arrays2, list_of_arrays3, constant)

I couldn't figure out how to make this play nice with the map command, so I re-organized the function in two ways. First, I pre-processed my data in such a way that the last constant was no longer necessary. I was lucky that this was very easy. (In fact, I should've done it before because it removed a list comprehension from the innermost loop.) Second, I combined the lists with zip and had the function unpack them when called. So I then had a call signature

output = my_package.my_fun(var1, var2, var3, zipped_up_arrays)
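The zip-and-unpack refactor can be sketched in plain, serial Python. Everything here (my_fun, the variables, the toy data) is a hypothetical stand-in for my actual code, but the structure is the point:

```python
# Stand-in for the refactored function: it receives one zipped tuple of
# arrays and unpacks it internally, so only one argument varies under map.
def my_fun(var1, var2, var3, zipped):
    arr1, arr2, arr3 = zipped
    return var1 * sum(arr1) + var2 * sum(arr2) + var3 * sum(arr3)

# Toy data standing in for the real lists of arrays.
list_of_arrays1 = [[1, 2], [3, 4]]
list_of_arrays2 = [[5, 6], [7, 8]]
list_of_arrays3 = [[9, 10], [11, 12]]

# Serial equivalent of lbv.map(lambda x: my_fun(1, 2, 3, x), zip(...)).
output = [my_fun(1, 2, 3, z)
          for z in zip(list_of_arrays1, list_of_arrays2, list_of_arrays3)]
print(output)  # → [82, 106]
```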

Finally, I invoked the parallel calculation with

output = lbv.map(lambda x: my_package.my_fun(var1, var2, var3, x),
                 zip(list_of_arrays1, list_of_arrays2, list_of_arrays3))

Et voilà! My calculation was done vastly faster.

The only problem...

...is that there seems to be a memory leak somewhere in ipcluster or the engines themselves. As a result, I kill the engines once in a while and re-initialize the client and interface objects before I run out of memory. Apparently this is a known problem that can be circumvented by manually clearing the caches of the client and interface objects

view.results.clear()
client.results.clear()
client.metadata.clear()

but I generally haven't found that this helps at all.

Have you used IPython's parallel routines? See something silly I'm doing? Let me know in the comments!