Out of memory issue when installing packages on an Ubuntu server


I am using an Ubuntu cloud server with only 512 MB of RAM and a 20 GB HDD. More than 450 MB of that RAM is already in use by running processes.

I need to install a new package, lxml, whose Cython-generated C sources are compiled with gcc during installation. The build is very memory-hungry, so it always exits with the error gcc: internal compiler error: Killed (program cc1), which means the compiler was killed because no RAM was available for it.

Upgrading the machine is an option, but it has its own issues, and a few of my live services/websites run from this server.

But lxml is already installed properly on my local machine. Since lxml is all I need, is it possible to pick all the relevant files from the local machine's directory and copy them onto the remote machine?

Will it work that way? If yes, how do I collect all the files for a package?
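In principle, yes, but rather than copying installed files by hand, the usual route is to build a wheel on the local machine and install that on the server. A sketch, assuming both machines run the same Python version and CPU architecture; `example.com` and the paths are placeholders:

```shell
# On the local machine: build (or download) a wheel for lxml into ./wheels.
pip wheel lxml -w ./wheels

# Copy the wheel to the server (example.com is a placeholder hostname).
scp ./wheels/lxml-*.whl user@example.com:/tmp/

# On the server: install the prebuilt wheel.
# No compilation happens, so no gcc and no out-of-memory kill.
pip install /tmp/lxml-*.whl
```

Note that a wheel built this way is specific to one platform and Python version, so this only works when the two machines match.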


Extend your RAM by adding a swap file:

A swap file is a file stored on the computer's hard drive that is used
as a temporary location for data that is not currently needed in RAM.
By using a swap file, a computer can use more memory than is
physically installed.

In Short:

  1. Log in as root (su -), or prefix each of the following commands with sudo
  2. dd if=/dev/zero of=/swapfile1 bs=1024 count=524288
  3. mkswap /swapfile1
  4. chown root:root /swapfile1
  5. chmod 0600 /swapfile1
  6. swapon /swapfile1
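As a quick sanity check on step 2: dd writes count blocks of bs bytes each, so 524288 blocks of 1024 bytes give a 512 MiB swap file:

```shell
# 1024-byte blocks * 524288 blocks = total file size in bytes
echo $((1024 * 524288))
```

That prints 536870912, i.e. 512 MiB. Adjust count if you want a larger or smaller swap file.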

The swap file is now active, but it will be gone after a reboot.
You should now have enough memory for the installation process.
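If you instead want the swap file to survive reboots, the usual approach is an entry in /etc/fstab (a sketch using the same path as in the steps above; check fstab(5) for your system):

```
/swapfile1 none swap sw 0 0
```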

To Remove the File:

  1. swapoff -v /swapfile1
  2. rm /swapfile1
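Either way, you can check the machine's current swap state with standard tools:

```shell
# Show total/used memory and swap; the Swap: line reads 0 when no swap is active.
free -h

# List active swap devices/files (prints nothing when none are active).
swapon --show
```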

The answers/resolutions are collected from Stack Overflow and are licensed under CC BY-SA 2.5, CC BY-SA 3.0 and CC BY-SA 4.0.