We released the Services feature: you can now run arbitrary headless programs on your account in sandboxed containers, with no extra permissions needed!
alwaysdata is an advanced Cloud Platform allowing its users to host as many websites and Web Apps as they want. The platform embeds a large range of features, from languages (PHP, Python, Node, Go, etc.) to DevOps-oriented functionalities (SSH access, Scheduled Tasks, etc.). They all come without restrictions, except for permissions: you cannot run a program with root privileges, as we handle the overall platform system and stability.
Letting you run your own custom programs in headless mode, like daemonized processes, was an important need for many of our users. That need is now fulfilled!
So we introduced Services for everyone, from Cloud to Catalyst users. They are custom programs running in non-interactive mode, without any user interaction. Because the platform already embeds everything those programs may need as dependencies (interpreters, libraries, etc.), you won’t be restricted.
The most interesting part of Services is the monitoring ability: because they’re designed to run as background processes, the system observes their execution and restarts them as soon as they die, ensuring your process is always up and ready without any action on your side. Unlike Sites processes, which are killed and restarted on demand by our front-end proxy, Services run without interruption and are monitored to stay up.
Our documentation has been updated with a Services section to help you declare them in your administration panel.
As with Scheduled Tasks, you only have to declare the command to run: this is the sole mandatory parameter for a new Service to be ready. The command may or may not already be available on the platform; you’re free to add your own programs in your account user space.
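A Service command can be any executable, including a script of your own. As a purely illustrative sketch (the function names and timing are hypothetical, not a platform requirement), a minimal headless worker in Python could look like:

```python
import time

def do_work():
    """One unit of work; replace with your own logic (placeholder)."""
    pass

def run(iterations=None, delay=0.01):
    """Main loop of a headless worker.

    Declared as a Service command, you would call run() with no
    argument so it loops forever; a bounded `iterations` is only
    useful for local testing.
    """
    count = 0
    while iterations is None or count < iterations:
        do_work()
        count += 1
        time.sleep(delay)
    return count
```

If such a process ever dies, the platform’s supervision simply starts it again.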
By default, your Service runs in an isolated process and is not reachable from outside unless you explicitly expose it. For some use cases, you may want to reach it externally. To do so, attach your running Service process to the :: IPv6 address and pick an available port in the 8300-8499 range. Thanks to our containerization architecture, the full range is accessible and reserved for your user, even on the Cloud Platform. Be careful: your process is publicly available this way, so you have to enable authentication when needed.
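To make that concrete, here is a minimal sketch of binding a TCP socket to :: on a port from that range (the exact port number is an arbitrary pick):

```python
import socket

PORT = 8300  # any free port in the 8300-8499 range reserved for your user

# Listen on all IPv6 interfaces ("::") so traffic can be routed
# to your Service from outside.
server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("::", PORT))
server.listen()
print(f"listening on [::]:{PORT}")
# A real Service would now accept() connections in a loop.
```

Remember that a socket exposed this way is public: handle authentication in your application when needed.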
In some cases, you may need to observe your Service’s execution and trigger a restart based on specific criteria rather than the simple process status. For that, you can specify a custom Monitoring Command: the Services’ supervision process will use it to determine whether the associated process needs to be restarted.
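For instance, a Monitoring Command could be a small script probing your Service. This sketch assumes the Service listens on the local IPv6 address on an arbitrary port, and that the supervisor treats the command’s exit code as the health signal (both are assumptions, not documented specifics):

```python
import socket

HOST, PORT = "::1", 8300  # where the monitored Service is assumed to listen

def healthy(timeout=3.0):
    """Return True if the Service currently accepts TCP connections."""
    try:
        with socket.create_connection((HOST, PORT), timeout=timeout):
            return True
    except OSError:
        return False

# Used as a Monitoring Command, the script would end with something like:
#     import sys; sys.exit(0 if healthy() else 1)
```

Any check works here: an HTTP ping, a queue depth, a log freshness test — whatever criterion fits your Service.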
Using queue schedulers (and their associated brokers) in your programs is one of the top ten use cases requested by our users. For Catalyst users, the feature is built-in, as they run in a completely isolated environment, and it has been available for a while now.
It’s more complicated for our Cloud users: because they share hardware resources, we couldn’t simply let a process run continuously, for security reasons. Now, thanks to Services, every user can run their own instance as a Service and access it on a dedicated port belonging to that user only! Smart and simple.
Because our Cloud platform is particularly loved by Pythonistas, you’ll be pleased to learn that running a Celery process (associated with a RabbitMQ instance, already available, or a Redis one in its own instance) became as simple as declaring a Site! And because Celery is a widely used queue scheduler, you’re not limited to Python: feel free to run it with your favorite Node.js, Ruby, or even PHP apps!
Web apps can become resource-hungry when they need to compute large responses to some requests, as in APIs or personal cloud solutions. Software designers often rely on caching solutions to speed up those apps. Some strategies were already available, like memory caching using APCu. For more advanced strategies, you may want to rely on binary cache storage systems. Memcached is one of them, a widely adopted solution available for every web language.
To run your own Memcached instance, simply run it as a Service, attaching it to an available port in the 8300-8499 range. Your private Memcached instance will then be available to your web processes on that port.
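Any Memcached client library can then talk to that instance. As a stdlib-only illustration of the text protocol it speaks, here is a sketch of storing a value (the host and port are assumptions matching the setup above):

```python
import socket

HOST, PORT = "::1", 8300  # assumed address of your private Memcached Service

def encode_set(key: str, value: bytes, ttl: int = 0) -> bytes:
    """Build a Memcached text-protocol 'set' command (flags fixed to 0)."""
    header = f"set {key} 0 {ttl} {len(value)}\r\n".encode()
    return header + value + b"\r\n"

def cache_set(key: str, value: bytes) -> bool:
    """Store a value in the instance; True if the server replied STORED."""
    with socket.create_connection((HOST, PORT), timeout=3) as conn:
        conn.sendall(encode_set(key, value))
        return conn.recv(1024).startswith(b"STORED")
```

In practice you would of course use a ready-made client for your language rather than raw sockets.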
Need to self-host a solution to stay in touch with your team or loved ones? Mattermost is a free, open-source alternative to tools like Slack or Discord. As a messaging system, a Mattermost server needs to be continuously accessible.
With Services, you can now easily host a Mattermost instance that will be monitored by the platform, ensuring its availability 24/7!
Observability is key to improving performance and catching issues in software engineering, and Web apps are no exception. One of the most renowned solutions for measuring, detecting, and preventing service interruptions is Datadog, which suits the Web ecosystem very well.
Running the Datadog agent on your cloud account used to require advanced privileges you don’t have. Now you’re free to run it by simply following the Datadog Basic Agent Usage documentation, with no extra requirements. Run the agent start process as a Service, and start collecting metrics on your apps!
The examples above are just a small sample of the infinite possibilities unleashed by the Services feature! You can expect to run an IRC bouncer like ZNC, messaging bots (e.g., Telegram bots registered with BotFather), or a virtual X server like Xvfb to interact with when using solutions that depend on graphics rendering, like QGIS Server.
So grab a free account on the platform and start running your own private Services right now!
We’re also curious about what you’ll want to run in headless mode. Let us know in the comments!