Blog posts for tags/python

  1. psutil 5.0.0 is twice as fast

    OK, this is a big one. Starting from psutil 5.0.0 you can query multiple pieces of process information around twice as fast as with previous versions (see original ticket and updated doc). It took me 7 months, 108 commits and a massive refactoring of psutil internals (here is the big commit), and I can safely say this is one of the best improvements and longest-standing issues addressed in a major psutil release. Here goes.

    The problem

    With few exceptions, the way process information is retrieved varies depending on the OS. Sometimes it requires reading a file in the /proc filesystem (Linux), other times it requires calling into C (Windows, BSD, OSX, SunOS), but every time it's done differently. psutil abstracts this complexity by providing a nice high-level interface so that you can, say, call Process.name() without worrying about what happens behind the curtains or what OS you're on.

    Internally, it is not rare that multiple pieces of process information (e.g. name(), ppid(), uids(), create_time()) can be fetched by the same routine. For example, on Linux we read /proc/PID/stat to get the process name, terminal, CPU times, creation time, status and parent PID, but only one of those values is returned and the others are discarded. On Linux the code below reads /proc/PID/stat 6 times:

    >>> import psutil
    >>> p = psutil.Process()
    >>> p.name()
    >>> p.cpu_times()
    >>> p.create_time()
    >>> p.ppid()
    >>> p.status()
    >>> p.terminal()
    

    Another example is BSD. In order to get the process name, memory, CPU times and other metrics, a single sysctl() call is sufficient, but because of how psutil used to work that same sysctl() call was executed every time (see here, here, and so on): one piece of information was returned (say name()) and the rest was discarded. Not anymore.

    Do it in one shot

    It should be clear that the approach described above is not efficient, especially considering that applications similar to top, htop, ps or glances usually collect more than one piece of information per process. psutil 5.0.0 introduces a new oneshot() context manager. When used, the internal routine is executed once (in the example below, when calling name()) and the other values are cached. Subsequent calls sharing the same internal routine (read /proc/PID/stat, call sysctl() or whatever) will return the cached value. With psutil 5.0.0 the code above can be rewritten like this, and on Linux it will run 2.4 times faster:

    >>> import psutil
    >>> p = psutil.Process()
    >>> with p.oneshot():
    ...     p.name()
    ...     p.cpu_times()
    ...     p.create_time()
    ...     p.ppid()
    ...     p.status()
    ...     p.terminal()
    

    Implementation

    One great thing about psutil's design is its abstraction. It is divided into 3 "layers". The first layer is the main Process class (Python), which dictates the high-level end-user API. The second layer is the OS-specific Python module, which is a thin wrapper on top of the OS-specific C extension module (the third layer). Because the code was organized this way (modularly), the refactoring was reasonably smooth. First I refactored the C functions collecting multiple pieces of information and grouped them into a single function each (e.g. see the BSD implementation). Then I wrote a decorator which enables the cache only when requested (when entering the context manager) and decorated the "grouped functions" with it. The whole thing is enabled on request by the top-level oneshot() context manager, which is the only thing exposed to the end user. Here's the decorator:

    import functools

    def memoize_when_activated(fun):
        """A memoize decorator which is disabled by default. It can be
        activated and deactivated on request.
        """
        @functools.wraps(fun)
        def wrapper(self):
            if not wrapper.cache_activated:
                return fun(self)
            else:
                try:
                    ret = cache[fun]
                except KeyError:
                    ret = cache[fun] = fun(self)
                return ret
    
        def cache_activate():
            """Activate cache."""
            wrapper.cache_activated = True
    
        def cache_deactivate():
            """Deactivate and clear cache."""
            wrapper.cache_activated = False
            cache.clear()
    
        cache = {}
        wrapper.cache_activated = False
        wrapper.cache_activate = cache_activate
        wrapper.cache_deactivate = cache_deactivate
        return wrapper
    

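    To give an idea of how the pieces fit together, here is a toy sketch (not psutil's actual code; the class and method names are made up) showing how a oneshot()-style context manager can drive the decorator above:

    import contextlib
    import time

    class Demo:
        # assumes memoize_when_activated (defined above) is in scope

        @memoize_when_activated
        def _fetch_everything(self):
            # stand-in for the expensive routine (read /proc, call sysctl(),
            # ...) which returns several values at once
            time.sleep(0.1)
            return {"name": "demo", "ppid": 1}

        def name(self):
            return self._fetch_everything()["name"]

        def ppid(self):
            return self._fetch_everything()["ppid"]

        @contextlib.contextmanager
        def oneshot(self):
            # enable the cache only for the duration of the "with" block
            self._fetch_everything.cache_activate()
            try:
                yield
            finally:
                self._fetch_everything.cache_deactivate()

    d = Demo()
    with d.oneshot():
        d.name()   # runs _fetch_everything() once and caches the result
        d.ppid()   # served from the cache
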
    In order to measure the various speedups I finally wrote a benchmark script (well, two actually) and kept tuning until I was sure the various changes made psutil actually faster. The benchmark scripts calculate the speedup you can get if you call all the "grouped" methods together (best case scenario).

    Linux: +2.56x speedup

    The Linux implementation is the only pure-Python one, as (almost) all process info is gathered by reading files in the /proc filesystem. Each /proc file typically contains several pieces of information about the process, and /proc/PID/stat and /proc/PID/status are perfect examples. That's why on Linux we aggregate them into 3 groups. The relevant part of the Linux implementation can be seen here.

    Windows: from +1.9x to +6.5x speedup

    Windows is an interesting one. In normal circumstances, when querying a process owned by our own user, we only group together num_threads(), num_ctx_switches() and num_handles(), getting a +1.9x speedup if we access those methods in one shot. Windows is peculiar though, because certain methods use a dual implementation: a "fast method" is attempted first, but if the process is owned by another user it fails with AccessDenied. In that case psutil falls back on a second, slower method (see here for example). The second method is slower because it iterates over all PIDs, but unlike the "plain" Windows APIs it can be used to get multiple pieces of information in one shot: number of threads, context switches, handles, CPU times, create time and IO counters. That is why querying processes owned by other users results in an impressive +6.5x speedup.
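
    Purely as an illustration of this fallback pattern (psutil actually does this in C against the real Windows APIs; the functions below are made-up Python stand-ins, not psutil internals):

    class AccessDenied(Exception):
        pass

    def fast_info(pid):
        # stand-in for the fast per-process call; it fails when the process
        # belongs to another user
        raise AccessDenied

    def slow_info(pid):
        # stand-in for the slower call which iterates over all PIDs but
        # returns many metrics for the target process in one shot
        return {"num_handles": 42, "num_threads": 7, "num_ctx_switches": 1000}

    def num_handles(pid):
        try:
            return fast_info(pid)["num_handles"]
        except AccessDenied:
            # slow path; with oneshot() the other values returned by
            # slow_info() would be cached and reused by sibling methods
            return slow_info(pid)["num_handles"]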

    OSX: +1.92x speedup

    On OSX we can get 2 groups of information. With the sysctl() syscall we get the process parent PID, uids, gids, terminal, create time and name. With the proc_info() syscall we get CPU times (for PIDs owned by another user), memory metrics and ctx switches. Not bad.

    BSD: +2.18x speedup

    BSD was an interesting one, as we gather a ton of process info just by calling sysctl() (see implementation). In a single shot we get process name, ppid, status, uids, gids, IO counters, CPU and create times, terminal and ctx switches.

    SunOS: +1.37x speedup

    The SunOS implementation is similar to the Linux one in that it reads files in the /proc filesystem, but unlike Linux this is done in C. Also in this case we can group different metrics together (see here and here).

  2. psutil 4.4.0: improved Linux memory metrics

    OK, here's another psutil release. The main highlights of this release are more accurate memory metrics on Linux and several OSX fixes. Here goes.

    Linux virtual memory

    This new release sets a milestone regarding virtual_memory() metrics on Linux, which are now calculated much more precisely (see commit). Over the years different people complained that the numbers reported by virtual_memory() were not accurate or did not exactly match the ones reported by the free command line utility (see #862, #685, #538). So I investigated how "available memory" is calculated on Linux and indeed psutil was doing it wrong. It turns out the free cmdline utility itself, and many other similar tools, also did it wrong up until 2 years ago, when somebody finally decided to accurately calculate the available system memory straight in the Linux kernel and expose this information to user-level applications. Starting with Linux kernel 3.14, a new "MemAvailable" field was added to /proc/meminfo, and this is how psutil now determines available memory. Because of this, both the "available" and "used" memory fields returned by virtual_memory() precisely match the free command line utility. For older kernels (< 3.14), psutil tries to determine the value by using the same algorithm used in the original Linux kernel commit. The free cmdline utility's source code also inspired an additional fix which prevents available memory from overflowing total memory in LXC containers.
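
    For whoever wants to read the new field outside psutil, on a modern kernel it is straightforward (a minimal sketch; psutil's real code also emulates the kernel's algorithm for kernels older than 3.14):

    def available_memory():
        """Return MemAvailable from /proc/meminfo in bytes (kernel >= 3.14)."""
        with open('/proc/meminfo') as f:
            for line in f:
                if line.startswith('MemAvailable:'):
                    # the value is expressed in kibibytes
                    return int(line.split()[1]) * 1024
        return None  # field missing: kernel older than 3.14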

    OSX fixes

    For many years the OSX development of psutil occurred on a very old OSX 10.5 installation, which I emulated via VirtualBox. The OS itself was a hacked version of OSX called iDeneb. After many years I finally managed to get access to a more recent version of OSX (El Capitan) thanks to VirtualBox + Vagrant. With this I finally had the chance to address many long-standing OSX bugs. Here's the list:

    • 514: fix Process.memory_maps() segfault (critical!).
    • 783: Process.status() may erroneously return "running" for zombie processes.
    • 908: different process methods could erroneously mask the real error for high-privileged PIDs and raise NoSuchProcess and AccessDenied instead of OSError and RuntimeError.
    • 909: Process.open_files() and Process.connections() methods may raise OSError with no exception set if process is gone.
    • 916: fix many compilation warnings.

    Improved procinfo.py script

    procinfo.py is a script which shows psutil's capabilities for obtaining different kinds of information about a process. I improved it so that it now reports a lot more info. Here's a sample output:

    $ python scripts/procinfo.py
    pid           4600
    name          chrome
    parent        4554 (bash)
    exe           /opt/google/chrome/chrome
    cwd           /home/giampaolo
    cmdline       /opt/google/chrome/chrome
    started       2016-09-19 11:12
    cpu-tspent    27:27.68
    cpu-times     user=8914.32, system=3530.59,
                  children_user=1.46, children_system=1.31
    cpu-affinity  [0, 1, 2, 3, 4, 5, 6, 7]
    memory        rss=520.5M, vms=1.9G, shared=132.6M, text=95.0M, lib=0B,
                  data=816.5M, dirty=0B
    memory %      3.26
    user          giampaolo
    uids          real=1000, effective=1000, saved=1000
    gids          real=1000, effective=1000, saved=1000
    terminal      /dev/pts/2
    status        sleeping
    nice          0
    ionice        class=IOPriority.IOPRIO_CLASS_NONE, value=0
    num-threads   47
    num-fds       379
    I/O           read_count=96.6M, write_count=80.7M,
                  read_bytes=293.2M, write_bytes=24.5G
    ctx-switches  voluntary=30426463, involuntary=460108
    children      PID    NAME
                  4605   cat
                  4606   cat
                  4609   chrome
                  4669   chrome
    open-files    PATH
                  /opt/google/chrome/icudtl.dat
                  /opt/google/chrome/snapshot_blob.bin
                  /opt/google/chrome/natives_blob.bin
                  /opt/google/chrome/chrome_100_percent.pak
                  [...]
    connections   PROTO LOCAL ADDR            REMOTE ADDR               STATUS
                  UDP   10.0.0.3:3693         *:*                       NONE
                  TCP   10.0.0.3:55102        172.217.22.14:443         ESTABLISHED
                  UDP   10.0.0.3:35172        *:*                       NONE
                  TCP   10.0.0.3:32922        172.217.16.163:443        ESTABLISHED
                  UDP   :::5353               *:*                       NONE
                  UDP   10.0.0.3:59925        *:*                       NONE
    threads       TID              USER          SYSTEM
                  11795             0.7            1.35
                  11796            0.68            1.37
                  15887            0.74            0.03
                  19055            0.77            0.01
                  [...]
                  total=47
    res-limits    RLIMIT                     SOFT       HARD
                  virtualmem             infinity   infinity
                  coredumpsize                  0   infinity
                  cputime                infinity   infinity
                  datasize               infinity   infinity
                  filesize               infinity   infinity
                  locks                  infinity   infinity
                  memlock                   65536      65536
                  msgqueue                 819200     819200
                  nice                          0          0
                  openfiles                  8192      65536
                  maxprocesses              63304      63304
                  rss                    infinity   infinity
                  realtimeprio                  0          0
                  rtimesched             infinity   infinity
                  sigspending               63304      63304
                  stack                   8388608   infinity
    mem-maps      RSS      PATH
                  381.4M   [anon]
                  62.8M    /opt/google/chrome/chrome
                  15.8M    /home/giampaolo/.config/google-chrome/Default/History
                  6.6M     /home/giampaolo/.config/google-chrome/Default/Favicons
                  [...]
    

    NIC netmask on Windows

    net_if_addrs() on Windows is now able to return the netmask.
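
    For example (the interface name and addresses below are made up; the output shape is psutil's snicaddr namedtuple):

    >>> import psutil
    >>> psutil.net_if_addrs()['Ethernet'][0]
    snicaddr(family=<AddressFamily.AF_INET: 2>, address='10.0.0.3',
             netmask='255.255.255.0', broadcast=None, ptp=None)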

    Other improvements and bug fixes

    Just take a look at the HISTORY file.

  3. psutil 4.2.0: Windows services in Python

    New psutil 4.2.0 is out. The main feature of this release is the support for Windows services:

    >>> import psutil
    >>> list(psutil.win_service_iter())
    [<WindowsService(name='AeLookupSvc', display_name='Application Experience') at 38850096>,
     <WindowsService(name='ALG', display_name='Application Layer Gateway Service') at 38850128>,
     <WindowsService(name='APNMCP', display_name='Ask Update Service') at 38850160>,
     <WindowsService(name='AppIDSvc', display_name='Application Identity') at 38850192>,
     ...]
    >>> s = psutil.win_service_get('alg')
    >>> s.as_dict()
    {'binpath': 'C:\\Windows\\System32\\alg.exe',
     'description': 'Provides support for 3rd party protocol plug-ins for Internet Connection Sharing',
     'display_name': 'Application Layer Gateway Service',
     'name': 'alg',
     'pid': None,
     'start_type': 'manual',
     'status': 'stopped',
     'username': 'NT AUTHORITY\\LocalService'}
    

    I did this mainly because I find the pywin32 APIs too low-level. Having something like this in psutil can be useful to discover and monitor services more easily. The code changes are here and here's the doc. The API for querying a service is similar to psutil.Process's. You get a reference to a service object by using its name (which is unique for every service) and then use name(), status(), etc.:

    >>> s = psutil.win_service_get('alg')
    >>> s.name()
    'alg'
    >>> s.status()
    'stopped'
    

    Initially I thought about exposing a complete set of APIs covering all aspects of service handling, including start(), stop(), restart(), install(), uninstall() and modify(), but I soon realized I would have ended up reimplementing what pywin32 already provides, at the cost of overcrowding the psutil API (see my reasoning here). I think psutil should really be about monitoring, not about installing and modifying system stuff, especially something as critical as a Windows service.

    Considerations about Windows services

    For those of you who are not familiar with Windows, a service is something, generally an executable (.exe), which runs at system startup and keeps running in the background. We can say they are the equivalent of UNIX init scripts. All services are controlled by a "manager" which keeps track of their status and metadata (e.g. description, startup type) and which can start and stop them. It is interesting to note that since (most) services are bound to an executable (and hence a process), you can reference the underlying process via its PID:

    >>> s = psutil.win_service_get('sshd')
    >>> s
    <WindowsService(name='sshd', display_name='Open SSH server') at 38853046>
    >>> s.pid()
    1865
    >>> p = psutil.Process(1865)
    >>> p
    <psutil.Process(pid=1865, name='sshd.exe') at 140461487781328>
    >>> p.exe()
    'C:\CygWin\bin\sshd'
    

    Other improvements

    psutil 4.2.0 comes with 2 other enhancements for Linux:

    • psutil.virtual_memory() returns a new "shared" memory field. This is the same value reported by free cmdline utility.
    • I changed the way /proc is parsed. Instead of reading /proc/{pid}/status line by line I now use a regular expression (a rough sketch of the idea follows this list). Here are the speedups:
      • Process.ppid() is 20% faster
      • Process.status() is 28% faster
      • Process.name() is 25% faster
      • Process.num_threads() is 20% faster (on Python 3 only; on Python 2 it's a bit slower; I suppose the re module received some improvements)
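
    Here's a rough sketch of the idea (not psutil's actual code), runnable on Linux:

    import re

    def parse_status(pid):
        # Read /proc/<pid>/status once and pull a few fields out with
        # regular expressions instead of iterating over the file line by line.
        with open("/proc/%s/status" % pid) as f:
            data = f.read()
        name = re.search(r"^Name:\t(.*)$", data, re.MULTILINE).group(1)
        state = re.search(r"^State:\t(.)", data, re.MULTILINE).group(1)
        ppid = int(re.search(r"^PPid:\t(\d+)$", data, re.MULTILINE).group(1))
        return name, state, ppid
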
  4. psutil NetBSD support

    Roughly two months have passed since I last announced psutil added support for OpenBSD platforms. Today I am happy to announce we also have NetBSD support! This was contributed by Thomas Klausner, Ryo Onodera and myself in PR #570.

    Differences with FreeBSD (and OpenBSD)

    The NetBSD implementation has limitations similar to the ones I encountered with OpenBSD. Again, FreeBSD remains the BSD variant with the best support in terms of kernel functionality.

    • Process.memory_maps() is not implemented. The kernel provides the necessary pieces but I haven't done this yet (hopefully later).
    • Process.num_ctx_switches()'s involuntary field is always 0. kinfo_proc syscall provides this info but it is always set to 0.
    • Process.cpu_affinity() (get and set) is not supported.
    • psutil.cpu_count(logical=False) always returns None.

    As for the rest: it is all there. All memory, disk, network and process APIs are fully supported and functioning.

    Other enhancements available in this psutil release

    Other than NetBSD support this new release has a couple of interesting enhancements:

    • #708: [Linux] psutil.net_connections() and Process.connections() can be up to 3x faster in case of many connections.
    • #718: process_iter() is now thread safe.

    You can read the rest in the HISTORY file, as usual.

    Move to Prague

    As a personal note I'd like to add that I'm currently in Prague (Czech Republic) and I'm thinking about moving down here for a while. The city is great and girls are beautiful. ;-)

  5. Real process memory and environ in Python

    New psutil 4.0.0 is out, with some interesting news about process memory metrics. I'll just get straight to the point and describe what's new.

    "Real" process memory info

    Determining how much memory a process really uses is not an easy matter (see this and this). RSS (Resident Set Size), which is what most people usually rely on, is misleading because it includes both the memory which is unique to the process and the memory shared with other processes. What would be more interesting in terms of profiling is the memory which would be freed if the process were terminated right now. In the Linux world this is called USS (Unique Set Size), and this is the major feature introduced in psutil 4.0.0 (not only for Linux but also for Windows and OSX).

    USS memory

    The USS (Unique Set Size) is the memory which is unique to a process and which would be freed if the process were terminated right now. On Linux it can be determined by parsing all the "private" blocks in /proc/pid/smaps. The Firefox team pushed this further and managed to do the same on OSX and Windows as well, which is great. The new version of psutil is now able to do the same:

    >>> psutil.Process().memory_full_info()
    pfullmem(rss=101990, vms=521888, shared=38804, text=28200, lib=0, data=59672, dirty=0, uss=81623, pss=91788, swap=0)
    

    PSS and swap

    On Linux there are two additional metrics which can also be determined via /proc/pid/smaps: PSS and swap. PSS, aka "Proportional Set Size", is the process's private memory plus its share of the memory shared with other processes, where each shared region is divided evenly between the processes sharing it. I.e. if a process has 10 MB all to itself (USS) and shares 10 MB with one other process, its PSS will be 15 MB. "Swap" is simply the amount of memory that has been swapped out to disk. With memory_full_info() it is possible to implement a tool like the one below, similar to smem on Linux, which provides a list of processes sorted by USS. It is interesting to notice how RSS differs from USS:

    ~/svn/psutil$ ./scripts/procsmem.py
    PID     User    Cmdline                            USS     PSS    Swap     RSS
    ==============================================================================
    ...
    3986    giampao /usr/bin/python3 /usr/bin/indi   15.3M   16.6M      0B   25.6M
    3906    giampao /usr/lib/ibus/ibus-ui-gtk3       17.6M   18.1M      0B   26.7M
    3991    giampao python /usr/bin/hp-systray -x    19.0M   23.3M      0B   40.7M
    3830    giampao /usr/bin/ibus-daemon --daemoni   19.0M   19.0M      0B   21.4M
    20529   giampao /opt/sublime_text/plugin_host    19.9M   20.1M      0B   22.0M
    3990    giampao nautilus -n                      20.6M   29.9M      0B   50.2M
    3898    giampao /usr/lib/unity/unity-panel-ser   27.1M   27.9M      0B   37.7M
    4176    giampao /usr/lib/evolution/evolution-c   35.7M   36.2M      0B   41.5M
    20712   giampao /usr/bin/python -B /home/giamp   45.6M   45.9M      0B   49.4M
    3880    giampao /usr/lib/x86_64-linux-gnu/hud/   51.6M   52.7M      0B   61.3M
    20513   giampao /opt/sublime_text/sublime_text   65.8M   73.0M      0B   87.9M
    3976    giampao compiz                          115.0M  117.0M      0B  130.9M
    32486   giampao skype                           145.1M  147.5M      0B  149.6M
    

    Implementation

    In order to get these values (USS, PSS and swap) we need to walk through the whole process address space. This usually requires higher user privileges and is considerably slower than getting the "usual" memory metrics via Process.memory_info(), which is probably the reason why tools like ps and top show RSS/VMS instead of USS. A big thanks goes to the Mozilla team, which figured all this stuff out on Windows and OSX, and to Eric Rahm, who put the PRs for psutil together (see #744, #745 and #746). For those of you who don't use Python and would like to port the code to other languages, the interesting parts are in the psutil source code.

    Memory type percent

    After reorganizing the process memory APIs I decided to add a new memtype parameter to Process.memory_percent(). With it, it is now possible to compare a specific memory type (not only RSS) against the total physical memory. E.g.:

    >>> psutil.Process().memory_percent(memtype='pss')
    0.06877466326787016
    

    Process environ

    The second biggest improvement in psutil 4.0.0 is the ability to determine a process's environment variables. This opens up interesting possibilities for process recognition and monitoring techniques. For instance, one might start a process with a certain custom environment variable, then iterate over all processes to find the one of interest (and figure out whether it's running or whatever):

    import psutil
    for p in psutil.process_iter():
        try:
            env = p.environ()
        except psutil.Error:
            pass
        else:
            if 'MYAPP' in env:
                ...
    

    Process environ was a long-standing issue (dating back to 2009) which I had given up on implementing because the Windows implementation worked for the current process only. Frank Benkstein solved that, and the process environment can now be determined on Linux, Windows and OSX for all processes (of course you may still bump into AccessDenied for processes owned by another user):

    >>> import psutil
    >>> from pprint import pprint as pp
    >>> pp(psutil.Process().environ())
    {...
     'CLUTTER_IM_MODULE': 'xim',
     'COLORTERM': 'gnome-terminal',
     'COMPIZ_BIN_PATH': '/usr/bin/',
     'HOME': '/home/giampaolo',
     'PWD': '/home/giampaolo/svn/psutil',
      }
    >>>
    

    It must be noted that the resulting dict usually does not reflect changes made after the process started (e.g. os.environ['MYAPP'] = '1'). Again, for whoever is interested in doing this in other languages, the interesting parts are in the psutil source code.
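
    For instance, on Linux (MYAPP is an arbitrary variable name used only for this example):

    >>> import os, psutil
    >>> os.environ['MYAPP'] = '1'   # changes the running interpreter's env...
    >>> 'MYAPP' in psutil.Process().environ()
    False                           # ...but not the snapshot read from the OS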

    Extended disk IO stats

    psutil.disk_io_counters() has been extended to report additional metrics on Linux and FreeBSD:

    • busy_time, which is the time spent doing actual I/Os (in milliseconds).
    • read_merged_count and write_merged_count (Linux only), which are the number of merged reads and writes (see the iostats doc).

    With these new metrics it is possible to get a better representation of actual disk utilization, similar to the iostat command on Linux.
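
    As a rough illustration (a sketch, not a full iostat replacement; it assumes a Linux or FreeBSD system exposing busy_time), per-disk utilization over a sampling interval can be approximated from the busy_time delta:

    import time
    import psutil

    INTERVAL = 1.0  # seconds
    before = psutil.disk_io_counters(perdisk=True)
    time.sleep(INTERVAL)
    after = psutil.disk_io_counters(perdisk=True)
    for disk in after:
        # busy_time is expressed in milliseconds
        busy_ms = after[disk].busy_time - before[disk].busy_time
        print("%-10s %5.1f%% busy" % (disk, busy_ms / (INTERVAL * 1000) * 100))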

    OS constants

    Given the increasing number of platform-specific metrics I added a new set of constants to quickly tell what platform you're on: psutil.LINUX, psutil.WINDOWS, etc.

    Main bug fixes

    • #734: on Python 3, invalid UTF-8 data was not correctly handled for the process name(), cwd(), exe(), cmdline() and open_files() methods, resulting in UnicodeDecodeError. This affected all platforms. Now the surrogateescape error handler is used as a workaround for replacing the corrupted data.
    • #761: [Windows] psutil.boot_time() no longer wraps to 0 after 49 days.
    • #767: [Linux] disk_io_counters() may raise ValueError on 2.6 kernels and it's broken on 2.4 kernels.
    • #764: psutil can now be compiled on NetBSD-6.X.
    • #704: psutil can now be compiled on Solaris sparc.

    Complete list of bug fixes is available here.

    Porting code

    4.0.0 being a major version, I took the chance to (lightly) change / break some APIs.

    • Process.memory_info() no longer returns just an (rss, vms) namedtuple. Instead it returns a namedtuple of variable length, whose fields change depending on the platform (rss and vms are always present though, also on Windows). Basically it returns the same result as the old memory_info_ex(). This shouldn't break your existing code unless you were doing rss, vms = p.memory_info() (see the short snippet after this list).
    • At the same time, memory_info_ex() is now deprecated. The method is still there as an alias for memory_info(), issuing a deprecation warning.
    • psutil.disk_io_counters() returns 2 additional fields on Linux and 1 additional field on FreeBSD.
    • psutil.disk_io_counters() on NetBSD and OpenBSD no longer returns the write_count and read_count metrics, because the kernel does not provide them (we were returning the busy time instead). I don't expect this to be a big issue since NetBSD and OpenBSD support is very recent.
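
    As a quick porting example for the memory_info() change mentioned above (hypothetical snippet, just to show the idea):

    import psutil

    p = psutil.Process()
    # Before 4.0.0 "rss, vms = p.memory_info()" worked because the namedtuple
    # had exactly 2 fields. With 4.0.0 the number of fields varies per
    # platform, so access them by name instead:
    mem = p.memory_info()
    rss, vms = mem.rss, mem.vms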

    Final notes and looking for a job

    OK, this is it. I would like to spend a couple more words to announce to the world that I'm currently unemployed and looking for a remote gig again! =) I want remote because my plan for this year is to remain in Prague (Czech Republic) and possibly spend 2-3 months in Asia. If you know of any company looking for a Python backend dev who can work from afar, feel free to share my LinkedIn profile or mail me at g.rodola [at] gmail [dot] com.
