Meet our timer daemon, it’s a revolution! Tue, 04 Dec 2018 14:13:54 +0000 As the year comes to an end, we’re releasing all the developments of the last few months that are now ready to go into production. We’ve got a lot of new features to improve your experience with your alwaysdata account.

Some Web apps or services require you to run tasks periodically. From a WordPress blog that you want to rely on OS-scheduled tasks, to a broker whose message queue needs periodic purging, or an RSS feed reader that should retrieve new articles at a given time, there are plenty of use cases.

The main point is: services may need to run commands, or fetch URLs, without any user interaction. Every time you face this case, you need to register a scheduled task.

Our platform uses Debian as its base, and you’ve got full access to your Unix account on the servers. It means you can register your tasks by editing your user’s crontab with crontab -e.
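As a reminder, a crontab entry packs five time fields and a command into one line; for instance, an entry that would run a cleanup script every day at 3 a.m. (the script path is purely illustrative):

```
# min hour day-of-month month day-of-week command
0 3 * * * $HOME/scripts/cleanup.sh
```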

For us, it wasn’t enough, for two reasons. The first one is that it splits the administration logic between our admin panel and the Unix system. The second one is that the crontab syntax is so dull.

The bonus1) point is that a task scheduled using crontab means its declaration stays on one server, which is not the server that runs your services2). When we move your account to another server3), we have to take care of potential cron tables, which breaks the unity and simplicity of our platform.

So, we added Scheduled Tasks support directly in your interface!

Administration Panel: Scheduled Tasks Interface

If you need to register a new task for any of your services, go to this new section in the administration panel and add a new task.

You have to fill in the form with a few pieces of information:

  • The command(s) you want to run, or the URLs you want to GET: if you need to ping one or more URLs to trigger distant services, just fill them in; otherwise, declare the command line to execute.
  • The periodicity at which you want the task to repeat. You can specify a given time or a time interval. If this field isn’t powerful enough for your needs, feel free to fall back to the crontab syntax to run your task with precision.
Administration Panel: Create A New Task Interface

That’s all. Your task is now registered, and the platform takes care of running it.

Regarding the WordPress example mentioned above, you can register a task like php $HOME/wordpress/htdocs/wp cron event run --due-now and run it every 10 minutes4).

All timer features are also available through the API, allowing you to easily register any scheduled task programmatically.

We do not rely on existing cron tools for those tasks, but instead use our own implementation: aljob. This is for two reasons:

  1. Scheduled tasks aren’t executed right at the given time, but within a one-minute range. It means that a task scheduled at 00:00 will run at some point between 00:00:00 and 00:00:59. When you’ve got hundreds of tasks scheduled at midnight5), this prevents an overload of the servers.
  2. If you often run a long task (e.g. every 10 minutes), our daemon automatically detects that the previous run hasn’t exited, and skips the new one, preventing overload and race conditions in your long-running loop.
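Neither behavior is magic. Both can be sketched in a few lines of Python (this is an illustration, not aljob’s actual code): a random delay spreads start times over the minute, and a non-blocking lock file makes a run skip itself when the previous one is still alive.

```python
import fcntl
import random
import subprocess
import time

def run_once(command, lock_path, jitter=59):
    """Sketch of the two aljob behaviors (not the real implementation)."""
    # 1. spread the start time inside the scheduled minute
    time.sleep(random.uniform(0, jitter))
    with open(lock_path, "a") as lock:
        try:
            # 2. non-blocking lock: it fails if the previous run is still alive
            fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            return None  # previous task still running: skip this one
        return subprocess.run(command, shell=True).returncode
```

The lock is released automatically when the process exits, so a crashed task never blocks the next run.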

Want some funny stuff? You can run tasks even without any service installed in your account. Here’s a custom command I run every day:

curl -L "" | jq '.results | to_entries | .[].value | "\(.title) [\(.url)] (\(.relative_time))"' | mail -s "Daily news about Google"

It pings the DuckDuckGo news API and queries it for content related to Google6), then filters the returned JSON to extract each news item’s title, URL, and date. It finally compiles the result and mails it to me, so I’ve got my daily digest of Google-related stuff in my mailbox.
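If jq isn’t your thing, the same filter can be sketched in Python. The sample below assumes the response shape implied by the jq filter (a `results` object whose values carry `title`, `url`, and `relative_time`); the actual API payload may differ.

```python
import json

def format_news(payload):
    """Mimic the jq filter: title, URL, and relative date for each result."""
    results = json.loads(payload)["results"]
    # jq's `to_entries | .[].value` iterates over the object's values
    return [
        f"{item['title']} [{item['url']}] ({item['relative_time']})"
        for item in results.values()
    ]

sample = json.dumps({"results": {"0": {
    "title": "Example", "url": "https://example.org", "relative_time": "2h ago"}}})
print(format_news(sample))  # → ['Example [https://example.org] (2h ago)']
```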

The Scheduled tasks panel is now available in your administration panel. We will remove crontab support for shared hosting soon. If you rely on crontab for your tasks, they’ll be migrated automatically to scheduled tasks. VPS and Dedicated hosting will have both scheduled tasks and native crontab available, and can use whichever solution they prefer.

If you spot any possible improvement to this feature, feel free to report it. We’re also curious about the uses you’ll find for scheduled tasks: use the comments to surprise us!

Other features are on their way to production. Stay tuned and keep an eye on our Twitter #advent thread as more news is coming.

Notes

1. and technical
2. our SSH servers and HTTP servers aren’t the same ones
3. which occurs transparently every now and then
4. do not forget to disable the WordPress fake embedded cron by adding define('DISABLE_WP_CRON', true); in your wp-config.php file
5. like on our Shared hosting plans
6. yeah, I like to troll them all
Frameworks… everywhere… Thu, 29 Nov 2018 12:37:29 +0000 Two weeks ago, we announced a huge revamp of our marketplace. This new version allows you to install several Web apps effortlessly, in 1 click, into your account.

Since then, we1) have continued our scripting work; here’s the result!


What would the Web be without the various language frameworks we use every day to power our apps?

We’re Pythonistas here at alwaysdata, especially happy with the Django framework that powers many components of our stack. However, we know that what makes the Web realm so exciting is its diversity. So we chose to give you the ability to deploy most well-known Web frameworks in 1 click!

We built Hello World examples for all of them, and that’s what you’ll deploy when installing one. Here’s the brand-new list:

Administration Panel: 1-click install, frameworks list
  • CakePHP: Modern PHP 7 framework
  • Django: High-level Python Web framework that encourages rapid development and clean, pragmatic design
  • Express.js: Fast, unopinionated, minimalist web framework for Node.js
  • Flask: Microframework for Python based on Werkzeug, Jinja 2 and good intentions
  • Laravel: PHP Framework for Web Artisans
  • Macaron: A highly productive and modular web framework in Go
  • Phoenix: A productive web framework for Elixir
  • Ruby on Rails: Famous web application development framework written in Ruby
  • Sailor: A Lua MVC web framework
  • Sinatra: DSL for quickly creating web applications in Ruby
  • Symfony: PHP components and PHP framework for web projects
  • Zend: Professional PHP packages ready for PHP 7

Once installed, feel free to access your account remotely, from (s)FTP to WebDAV, to work on it, run some tests, and unleash the power of Web applications!

New apps too

With this release of new scripts come new apps too! We added support for:

  • Connecthys: Open source Web portal for managing multi-activities
  • Kinto: JSON document store with sharing and synchronisation capabilities
  • Omeka (S and Classic): Web publishing platforms for sharing digital collections

More are coming soon; we keep growing the list every day!

Frameworks are the second step for our marketplace. We’re always working on writing install scripts for new apps and Web solutions. Coming soon is the third article of the series, which will bring you the whole power of this platform.

Stay tuned and, oh, it’s nearly December! Keep an eye on our Twitter feed as we’ve got many surprises for this advent period 🎁!

Notes

1. Big up again to Héloïse 🙏
Are you ready for a new place where everything starts? Tue, 13 Nov 2018 12:37:08 +0000 November is pretty much here. I guess it’s time to give you some feedback about our development work last summer, and what we built around the pool.

A feature we had wanted to rethink from scratch for a while is our 1-click install. We offer a simple way to deploy pre-provisioned apps in your user space. Unfortunately, our architecture was too outdated to easily add new apps, so we decided to rewrite it1).

The Marketplace: a new place to start your projects

Deploying a new app often follows the same steps:

  1. Find the archive containing the app or service you want to install
  2. Download it in your user space, extract it, and navigate into the created folder
  3. Perform actions, from the installation to configuration, either from the CLI or a (web)GUI
  4. Create the database your app is missing
  5. Configure it again
  6. Enable the %$@! missing extensions
  7. Configure it hopelessly for the last time
  8. Create a new site that points to your given app folder
  9. Fix permissions
  10. Sometimes drop everything and restart from 1/ to fix it (╯°□°)╯︵ ┻━┻

So we built a solution to help you through this process. Using our 1-click install, you can deploy and automatically provision your app or service in your user space. No more need to configure system requirements, or manually download and install stuff. Just click a button, and it’s done.

Administration Panel: 1-click install link
Administration Panel: 1-click install link

Alas, our initial architecture relied on automating some processes through the web UI, with interface scripting and tools like PhantomJS. The latter has been deprecated for a while, and all recent projects offer at least a CLI to manage installation, configuration, and provisioning. It was time to rethink our solution to improve its reliability. This new architecture also allows us to provide you with new apps we were unable to support previously. Welcome to a new place where everything starts!

Find your preferred app

We just released the first production-ready version of our new 1-click install platform. After testing it intensively over the last weeks, we’re now confident in its internal stability.

Administration panel screenshot: 1-click install interface
(please note that rates in the screenshot are purely fictitious for the sake of (realistic) content)

Another (huge) task was to port existing apps to the new platform. Releasing the new platform with fewer applications than before was a no-go. Our team2) put significant work into it. Here’s the list of apps and frameworks you can currently deploy with the new 1-click platform:

  • DokuWiki: A simple wiki, ideal for team documentation
  • Drupal: A PHP CMS/Framework to manage your website content
  • Gitea: A lightweight, self-hostable GitHub alternative for your code repositories and issues
  • Joomla: A commonly used PHP CMS
  • Magento: A well-known PHP e-Shop platform
  • MediaWiki: The best-known wiki, used to power Wikipedia
  • NeoFrag: An E-Sport oriented CMS
  • Nextcloud: A personal cloud that offers collaboration capabilities
  • PrestaShop: A simple e-Shop platform
  • Thelia: An E-Shop platform for medium to large businesses
  • Wallabag: An open source Read It Later solution
  • WordPress: The well-known CMS/Blog engine solution

Note: A few apps have disappeared, like phpBB, because they don’t offer a built-in way to deploy them (like a CLI). Because those apps also suffer from a lack of interest from the community, we decided to remove them.

What’s in the back, now?

As most recent Web applications offer improved ways to install and configure them, it makes sense to use their built-in capabilities. So we imagined a way to configure and script them easily.

Each script declares an environment and a variable configuration set using a YAML Front Matter.

    type: php
    php_version: '7.2'
    type: mysql

It allows the provisioning system to automagically create and set up the website type, configure the interpreter with the needed extensions, preset the databases needed to run the service, and so on. The latest tarball is then fetched3) and extracted in your user space.


# abort on the first failing command
set -e

# install the app's CLI tool through Composer
composer global require wonderfulapp/console

# download, then install, the app, using the database credentials
# injected by the provisioning system
php .composer/vendor/bin/wonderfulapp app:download --www="$INSTALL_PATH" default
php .composer/vendor/bin/wonderfulapp app:install --www="$INSTALL_PATH" --mysql-login="$DATABASE_USERNAME":"$DATABASE_PASSWORD" --mysql-host="$DATABASE_HOST" --mysql-database="$DATABASE_NAME" --skip-exists-check --drop default

# clean up the installer
rm -rf .composer

shopt -s dotglob nullglob

Then the script itself is executed to provision the application with the settings specific to your alwaysdata account. Generating configuration files, running database migrations, fixing permissions, and whatever other tedious deployment tasks: everything is automated.
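Conceptually, the runner only has to split the YAML header from the script body and run the body with the provisioned values in its environment. A minimal sketch, assuming the usual `---` front matter delimiters (our actual runner is more elaborate):

```python
import subprocess

def run_install_script(source, env_vars):
    """Split a front matter delimited by '---' lines from the script body,
    then run the body with the provisioning variables in its environment."""
    _, front_matter, body = source.split("---\n", 2)
    # the front matter would be handed to a YAML parser to provision the
    # account; here we only demonstrate the split and the script execution
    subprocess.run(["bash", "-c", body], env=env_vars, check=True)
    return front_matter
```

Variables such as `$INSTALL_PATH` or `$DATABASE_NAME` reach the script precisely through that `env` mapping.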

That’s all; your app is now ready to use.

Wow, it sounds great!

Yeah, it is! For us, it offers many advantages:

  • It’s just scripting, so it’s easy to write new scripts to add new apps and frameworks to the platform.
  • We removed the tricky parts based on automating web-UI configuration, so the scripts are more robust and more testable.
  • They’re scripts, so basic knowledge of shell commands is sufficient to develop them. Not comfortable with Bash? Use any scripting language you prefer (Python, Ruby, etc.), and it will work just as well!
  • We use the features offered by the app’s publisher to automate complex tasks, leaving compatibility between versions to the application itself.
  • Scripts are Shiny, Scripts are Bright, Scripts are the Children of the Unicorns.
  • Handling errors and fixing them is a lot easier than with UI automation.
  • Did I say it’s just scripting?

Our goal is to provide you with the most common services and applications, without breaking the whole install script each time a developer releases a new version. This new platform allows us to offer you this comfort, and we expect to bring you new apps regularly.

A word about safety during deployment

Some automated tasks provide you with feedback, and we also handle those use cases. Here are the most important parts relating to safety during the automated deployment process.

Random Passwords Generator

You often need to provide an admin password for your back-end interface when installing new applications. Our script can generate a random password for you, or let you input your own during the preset step of the process. Whatever you choose, your password is never stored by our platform. You remain the sole owner of your own solution.

Note: We don’t store any sensitive information, but if your app relies on an unencrypted password in its configuration file, the password stays in plain text. As the configuration file is stored in your user space with proper permissions, it’s not a major security concern. But if you consider it a risk, please file an issue in the app’s tracker to get encrypted password support ;).

Admin URLs, Custom configuration, and more

Some solutions allow you to customize your admin URL. We made some default choices to simplify the deployment process. You’re free to customize those URLs as you wish after the installation, to stick to your needs.

Finally, we just provision the deployed app with a safe default configuration. You can modify, update, and customize any settings right after the installation process.

Databases Creation

All mandatory databases are created automatically with a new user dedicated to your app and a random password. We do not store any information related to your database configuration during the process. Your data stays safe.

Logs (coming soon with the next version)

You can access the install logs generated during the automated process from your user space itself. If the install task fails for any reason, you can get more feedback about why. If you want to send the report to anyone, the logs already omit all sensitive information, so no username, path, or password appears in them.

We always want to give you a hosting platform that is simple and full of functionality. We expect this new feature to improve your deployment process and help you deliver your content faster.

This blog post has a companion. In the next few days, we will provide you with a new article that will unleash the whole power of this 1-click install process #teasing.

In the meantime, what if you told us which applications you want to see available through this process? Use the comments or wave at us by e-mail at to suggest new apps to add.

Notes

1. spoiler: there’s another excellent reason to build a new architecture, more news in an upcoming post 😉
2. and especially Héloïse and Nicolas, warm thanks to them
3. either using a classical cURL or with a dependency manager if provided through this way
Teaching program for better IT courses Thu, 20 Sep 2018 15:38:11 +0000 Web architectures have become more and more complex. Understanding how they work, and how to use them, is a big challenge. Our engineers started working with Internet stuff a long time ago1), and if we had to learn everything from scratch today, we would probably face an entirely different challenge. Because we had the chance to be there for a long time2), we think it’s our job to help newcomers learn in the best conditions.

x files nod GIF by The X-Files

Teaching program for teachers

Say you’re a teacher or an IT school, and you greet new students for a course cycle. They will learn how Web applications or services work, how to develop them, and how to push them to production. So you need to provide them with a production stack, meaning at least an HTTP server, probably a database, and the language you’re going to teach them, from Node.js to Python. You really don’t want to spend an entire hour (at least) helping them set up their environment. Nor to provide them with virtual machines that won’t run on everyone’s computers. Not to mention the fact that they will need to deliver a final project that you will have to test and review.

Why not rely on a ready-to-use solution to address the environment issue?

Here’s our plan with the teaching program. We want to build partnerships with people in charge of IT students. Using this program, teachers can provide their students with a fully provisioned environment.

To get it, just sign up for an account as a teacher, and open a ticket to ask us about the teacher plan, mentioning which course(s) and school you’re teaching for. We will upgrade your profile to a teacher one. This profile allows you to:

  • create as many free plans as you need (one per student)
  • increase the limit of each plan to 200MB
  • use your permissions layer to give each student full rights to perform actions on their account by attaching their e-mail address to it
  • not get your IP blacklisted by our security layer when we detect massive amounts of requests from all of your students

That’s all! All your students now have access to a dedicated free plan, where they can use all the features of our platform: languages, unlimited databases, as many sites as they need, etc. You, as a teacher, keep control of their accounts and can revoke their access at any time.

This program is free of charge for teachers in public schools. If you’re a private institution, contact us to build a partnership!

And still 50% for students

We know that, as students, we often need a hosting plan for our side projects, portfolios, open source tools, etc. Your student or unemployed status doesn’t mean you must give up on high-quality service. That’s why we offer complete access to all our plans with a 50% discount for all students and unemployed people. Don’t hesitate to ask for it!

people you may know GIF by The Orchard Films

Notes

1. I personally built my first website more than 20 years ago
2. who said we’re dinos‽
Fall 2018 Events Tue, 18 Sep 2018 15:12:53 +0000 It’s already mid-September, which probably means you’re now back to work. It also means it’s time for us to hit the road again and greet you with a wave! More than ever, we consider it crucial to share our thoughts and feedback about technologies, security, and privacy with others.

Without further delay, here’s our already confirmed program for this fall 2018 edition:

  • Sept. 19: LTArena (Amiens, France), Privacy By Design(FR)
  • Sept. 27: La Tech Amienoise (Amiens, France), Zero Knowledge Architecture(FR)
  • Oct. 6–7: PyconFR, (Lille, France), La Crypto pour les devs(FR), Full-remote, guide de survie en environnement distant(FR)
  • Oct. 18–19: Connect.Tech (Atlanta, Georgia, USA), Crypto for devs(EN)
  • Oct. 24–25: Blend Web Mix (Lyon, France), Privacy By Design, the hard way(FR) ; here’s a quick introduction, in French:

We’re still waiting on answers from some other events, like Capitole du Libre, POSS, DevDay… We expect to see you there too!

Organizing our events tour is a long-term task; that’s why we’re already planning 2019 events. If you want to see us in Quebec for the edition, please read our proposals and upvote them.

star trek hello GIF

See you there!

Custom logs Thu, 26 Jul 2018 11:37:58 +0000 Here’s our last blog post about the new features in our reverse-proxy engine. Previously, we talked about the WAF and the HTTP Cache. Now it’s time to introduce you to custom logs.

Log GIF @Giphy

Upstream logs

At alwaysdata, an upstream is an HTTP server our proxy uses as a backend to serve pages to your visitors. An upstream can be a built-in HTTP server embedded in your application, or a dedicated HTTP server like Apache or uWSGI.

We now write all output messages on standard streams1) to a dedicated file, available in the ~/admin/logs/sites/ directory. Those logs allow developers and DevOps people to monitor and debug their applications running on our platform. When you rely on a custom upstream (like a Node.js service), you can get your application’s output. It allows you to find the glitch when a service refuses to start properly.

The same file hosts all messages written by all upstreams belonging to the same alwaysdata account. Each upstream uses its PID2) to mark its lines in the log file. PIDs allow you to retrieve which process (a.k.a. which upstream) output a given line. This identifier is available between brackets after the date: [14/Jul/2018:10:04:21 +0200] [PID]. When an upstream ends (e.g., when it stays idle for a long time), its PID can change when it restarts. Two lines are output in the log file each time an upstream wakes up, allowing you to match PID and upstream:

[14/Jul/2018:10:04:21 +0200] Upstream starting: /command/to/your/upstream ...
[14/Jul/2018:10:04:21 +0200] Upstream started PID: 12345
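With those two marker lines, a small script can rebuild the PID-to-command mapping from a log excerpt. A quick sketch, assuming the log format shown above:

```python
import re

def map_upstreams(log_text):
    """Associate each 'Upstream started PID' with the command announced by
    the preceding 'Upstream starting' line."""
    pids, last_command = {}, None
    for line in log_text.splitlines():
        if (m := re.search(r"Upstream starting: (.+)", line)):
            last_command = m.group(1)
        elif (m := re.search(r"Upstream started PID: (\d+)", line)):
            pids[int(m.group(1))] = last_command
    return pids

log = (
    "[14/Jul/2018:10:04:21 +0200] Upstream starting: /command/to/your/upstream ...\n"
    "[14/Jul/2018:10:04:21 +0200] Upstream started PID: 12345\n"
)
print(map_upstreams(log))  # → {12345: '/command/to/your/upstream ...'}
```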

Access logs

You can now choose the name given to the access log files. To customize this entry, go to Sites → Edit → Logs.

Screenshot of website's logs customization view

You can also edit the output format. If you need to process your log files with a parser or a script, the custom output format lets your log files fit your workflow. This field accepts variable names between curly brackets {}; their values are substituted at writing time. You can also include free-form character strings. The syntax and available variables are documented on our logs page.

The default format is:

{request_hostname} {client_ip} - - [{completion_date:{%d/%b/%Y:%H:%M:%S %z}}] {request} {status} {response_size} {referer} {user_agent}

It returns a string like this: - - [16/Jul/2018:12:04:07 +0200] "GET /wp/ HTTP/1.1" 200 55380 "-" "curl/7.47.0"

To customize the output to include the protocol, the request duration, and some character strings, you may use the following syntax:

[{completion_date:{%d/%b/%Y:%H:%M:%S %z}}] protocol: {protocol} {request} duration: {duration} seconds

Which outputs:

[16/Jul/2018:12:04:07 +0200] protocol: "https" "GET /wp/ HTTP/1.1" duration: 0.134 seconds
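The substitution itself is straightforward. A sketch of what the engine does (not our real implementation), handling both plain `{variable}` placeholders and the nested `{completion_date:{fmt}}` strftime form:

```python
import re
from datetime import datetime, timedelta, timezone

def render_log_line(template, values, when):
    """Substitute {variable} placeholders; {completion_date:{fmt}} takes a
    strftime format string."""
    def repl(match):
        name, fmt = match.group(1), match.group(2)
        if name == "completion_date":
            return when.strftime(fmt)
        return str(values[name])
    return re.sub(r"\{(\w+)(?::\{(.*?)\})?\}", repl, template)

line = render_log_line(
    "[{completion_date:{%d/%b/%Y:%H:%M:%S %z}}] protocol: {protocol} duration: {duration} seconds",
    {"protocol": '"https"', "duration": 0.134},
    datetime(2018, 7, 16, 12, 4, 7, tzinfo=timezone(timedelta(hours=2))),
)
print(line)
```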

With fully customizable access logs and easier upstream debugging, we’ve designed a more comfortable hosting platform. Monitoring and observing services in production at alwaysdata becomes painless.

This blog post is the last one about our new proxy’s features. We built this service for you and with you. Please help us improve it further by giving us some feedback in the comments and telling us which features are still missing for you!

jin yang handshake GIF by Silicon Valley @Giphy

Notes

1. stdout and stderr
2. Process IDentifier
HTTP Cache Wed, 25 Jul 2018 12:37:50 +0000 Here is our second article dedicated to our new reverse-proxy engine and its awesome features! After the Web Application Firewall, we now take a look at the HTTP cache built into our infrastructure.

punch it star trek GIF @Giphy

What is an HTTP cache?

A good blog post is a post with a chart

We tested our WordPress blog’s performance using the new HTTP cache built into our proxy. Here is the result, which lets us bet that you may like this new feature:

There’s a considerable improvement in the number of requests handled by the proxy when the Cache is enabled. Where we serve only 15 req/s without it, this increases to 2604 req/s with it: a factor of 173, for the same time frame. The response time also improves, falling to approximately 0.38ms instead of 63.65ms. Pretty interesting for a feature that’s effortless to use!

We made this benchmark using ApacheBench, requesting the blog homepage. We ran each shot1) four times, with and without the Cache enabled, before compiling the results. Our blog runs on a dedicated server, but we expect a similar rate for shared hosting instances. You can run the test yourself by connecting to your account over SSH and running the ab command with the same options on your website.

How does it work?

A cache is a temporary storage that can serve stored results when they are requested again. An HTTP cache is a cache that stores web pages and assets. It is primarily used to decrease the load on an upstream server when it must serve a frequently requested page that doesn’t change between two requests.

When a client requests a page from a web server, the server generates an HTML response and sends it to the client over the network. Before the response leaves the infrastructure, the HTTP Cache intercepts it and stores it in its memory before letting it go.

Caching a resource (schema)
Caching a resource when a new request happens (icons from The Noun Project)

When our proxy encounters the same request again, it asks the Cache for an available version. If the page is available in the Cache’s memory, it is served instead of asking the upstream server.

Serving a cached resource (schema)
Serving previously cached resource (icons from The Noun Project)
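The core mechanism boils down to a keyed store with an expiry date. Here is a deliberately naive sketch of it (our real engine follows RFC 7234 and stores entries in Redis):

```python
import time

class TTLCache:
    """Naive keyed store: entries expire `ttl` seconds after being stored."""

    def __init__(self, ttl, clock=time.monotonic):
        self.ttl, self.clock, self.store = ttl, clock, {}

    def get(self, url):
        entry = self.store.get(url)
        if entry is None:
            return None               # miss: ask the upstream server
        body, stored_at = entry
        if self.clock() - stored_at > self.ttl:
            del self.store[url]       # expired: fetch a fresh copy
            return None
        return body                   # hit: the upstream isn't touched

    def put(self, url, body):
        self.store[url] = (body, self.clock())
```

The `ttl` parameter here is exactly the setting you tune in the interface, described below.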

Use it at alwaysdata

If you want to use the HTTP Cache, you can enable it individually for any site in the Sites → Edit → Cache section. Tick the Enable the cache option.

Screenshot of the interface to enable cache for a web site

You must set the TTL for the pages served by this website. The TTL defines how long the Cache retains a page before expiring it. Choose it well: while we recommend a high TTL for a page that rarely changes, you must reduce it for highly dynamic content like a news website. If you set too long a duration, your visitors may see an expired page instead.

For instance, we need every visitor to see the new homepage. When we publish a new article, the previous version of the homepage becomes outdated. We therefore prefer a TTL between 5 and 10 seconds. This way, we ensure we benefit from the high performance offered by the Cache with a relatively low risk of serving an old page.

This feature is currently in beta test and may evolve during the next weeks.

This feature needs your application or website to authorize the Cache to handle the requests. If resources aren’t explicitly marked as cacheable by your app, our HTTP Cache may be unable to store them.

What’s behind the scene?

For technical people, here’s how we built the cache. We chose to write, in Python2), a module that follows RFC 7234. A local Redis instance stores the cached resources, which allows us to manage the memory dedicated to storage effortlessly.

We also chose to implement the HTTP PURGE verb. This method allows you to remove the cached version of a resource by calling it on its URL. You can then force a Cache refresh easily.
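From the command line, that’s simply `curl -X PURGE <url>`; any HTTP client that lets you set a custom method works too. A minimal sketch with Python’s standard library (the URL is illustrative):

```python
import urllib.request

def build_purge_request(url):
    """Build a PURGE request for a cached resource; pass the result to
    urllib.request.urlopen() to actually send it."""
    return urllib.request.Request(url, method="PURGE")

request = build_purge_request("https://example.org/wp/")
print(request.get_method())  # → PURGE
```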

After performance, we made a significant effort on logging! In our next and last blog post, we’ll introduce the new logging system that lets you store custom-formatted logs, so you can debug your upstream applications effortlessly.

Notes

1. ab -c 10 -t 60
2. cause we ♥ Python at alwaysdata
Web Application Firewall (WAF) Tue, 24 Jul 2018 11:37:19 +0000 We have deployed a new version of our HTTP reverse-proxy engine on our production servers. It embeds a lot of new features, which we cover in this collection of blog posts.

Here, we want to introduce you to the Web Application Firewall (WAF) that is now built-in our reverse-proxy.

What’s a WAF?

All software has bugs. That’s even true for web applications. They may present security holes that can compromise their integrity. Attackers may want to gain full control over the web application; we call this kind of attack an infection. If they compromise the service itself, the consequences can be dramatic, from a simple unavailable website to a leak of personal data.

Cybersecurity is a full-time job. By following some good practices, and by using a WAF, you can increase your security level. Good news! We now embed one directly inside our infrastructure, and using it is as simple as a click.

Gandalf GIF @Giphy

A Web Application Firewall is a firewall that protects your website from malicious requests. It parses HTTP(S) requests and allows or denies them access to the server. It can block, alert on, or quarantine any request it considers malicious. It can also react to many attacks, to limit infections.

The request goes through the WAF to be analyzed. The firewall then decides to let the request go to the upstream server or to drop or isolate it (illustration)
The HTTP request goes through the WAF (icons from The Noun Project)

Modsecurity WAF

Instead of developing a new solution from scratch, we chose the ModSecurity WAF, developed by Trustwave SpiderLabs. This project has an excellent reputation in the security field. It’s also an open source project, so we can stick to our policy of giving you a hosting platform powered only by open source solutions. Finally, the ModSecurity community is very active, which improves the way the project evolves day by day.

ModSecurity is only a security engine. It uses sets of rules to analyze a request and mark it as malicious or not. We chose the open source ruleset from the OWASP ModSecurity Core Rule Set (CRS), which offers an excellent level of protection for web applications. It also covers the OWASP Top 10 with a low rate of false positives.

Configuring a WAF at alwaysdata

For convenience, the built-in WAF can be enabled individually for every website hosted at alwaysdata.

You’ll find six profiles with various levels of protection:

  1. Disabled
  2. Basic
    • Force strict HTTP protocol
    • Detect malicious bots
  3. Strong
  4. Full
    • Strong profile
    • Detect attacks for PHP language
    • Detect attacks by Local File Injection (LFI)
    • Detect attacks by Remote File Injection (RFI)
  5. WordPress
    • Full profile
    • A WordPress’ specific ruleset
  6. Drupal
    • Full profile
    • A Drupal’s specific ruleset

Please note that activating the WAF may increase the latency of every HTTP(S) request. This latency (a few ms) increases with the robustness of the selected profile, due to the request parsing time, which grows with the number of OWASP rules to apply.

To use it, select a protection profile in the Sites → Edit → WAF section.

Screenshot of the interface allowing you to enable the WAF in your web site

This feature is still in beta.

Our objective is to give you a reliable, robust, and safe hosting environment, without needless complexity. That’s why we want to give you a solid built-in WAF that you can enable with a single click.

After security, performance! In our next blog post, we’ll introduce our new HTTP cache and its impact on the delivery of your websites.

We believe in open source projects Fri, 08 Jun 2018 10:43:09 +0000 And all of a sudden, Microsoft is acquiring GitHub, infuriating the open source community these last days. Behind the angry tweets lies a realization: a monopolistic situation may jeopardize the open source ecosystem. It seems about time to discuss why decentralization is necessary, and to present our initiative to support open source projects.

Windy Mary Poppins GIF @Giphy

GitHub, Microsoft, Open source: why such a shitthunderstorm?

GitHub is a closed source platform based on an open source project: Git. Git is a distributed version control system. It allows developers, designers, editors, etc. to save their projects step by step on a timeline. They can keep track of what has been done, by whom, and when; they can roll back some modifications if needed, or even restart their projects from a past state1). While Git only brought the version control system, GitHub enriched it by adding social features like issue tracking, documentation wikis, collaboration tools (the pull requests), reviews, and many more.
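The timeline and roll-back abilities described above take only a few commands. A minimal sketch using a throwaway repository (names, messages, and the local user config are illustrative):

```shell
# A throwaway repository with two commits on its timeline
# (user.name/email set locally so the example runs anywhere)
git init demo && cd demo
git config user.email "demo@example.com"
git config user.name "Demo"

echo "v1" > file.txt
git add file.txt && git commit -m "first version"
echo "v2" > file.txt
git commit -am "second version"

# Who did what, and when
git log --oneline

# Roll back the last modification without rewriting history
git revert --no-edit HEAD
cat file.txt
```

After the revert, `file.txt` is back to its first version, and the timeline keeps a trace of the whole operation.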

GitHub is a SaaS2) solution, available in two flavors: free of charge for open source projects; for a fee, or on-premise with its Enterprise edition, for private repositories. It quickly gained the attention of the open source community thanks to its simplicity of use and its “social” tools. It then grew into the central place for open source projects over the last ten years.

Maybe you’ve already noticed the paradox. Let’s say it again: Git is a distributed system; GitHub has grown into a central place. The never-ending war over decentralizing the Web. Year after year, because many projects use the platform, and because it’s easy, it became the place to be when you released an open source project. It gave visibility, and quick, simple access to an upcoming community. It even became a way to distribute dependencies as code, with many languages choosing to use it as a native backend (see Golang, Node.js, etc.).

However, a single place to host everything means you take a considerable risk if it fails. As Hubert Sablonnière said:

So it is. Or so it seemed to be for many defenders of the free and open source philosophy when Microsoft recently announced its intention to acquire GitHub. Thousands of open source contributors have been there for many years, and many of them have seen how Microsoft acted in the past. Even if Microsoft is now quite involved in open source contributions, some of them are frightened by this announcement and have started to wonder what Microsoft has in mind for the future of GitHub. So far, it has mainly spoken about cloud deployment integration, but what will happen to GitHub-driven projects like Hubot, Electron, Atom, etc.?

This acquisition painfully revives the debate around decentralization, this time from the developer’s side. Maybe some projects will independently host their sources to stay away from the GAFAM and their monopoly. Do not forget that Git is decentralized by default3). GitHub is only a platform. The choice is still yours.
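Footnote 3’s point — Git is decentralized by default — can be made concrete with a few commands: a repository can have as many remotes as you like, each holding the full history. In this minimal sketch, two local bare repositories stand in for two independent hosting platforms (all paths and names are illustrative; real remotes would be URLs like `git@github.com:user/project.git`):

```shell
# Two bare repositories stand in for two independent hosting platforms
git init --bare /tmp/platform-a.git
git init --bare /tmp/platform-b.git

# A project with one commit
git init /tmp/project && cd /tmp/project
git config user.email "demo@example.com"
git config user.name "Demo"
echo "hello" > README.md
git add README.md && git commit -m "initial commit"

# Register both platforms as remotes and push the full history to each
git remote add platform-a /tmp/platform-a.git
git remote add platform-b /tmp/platform-b.git
git push platform-a HEAD
git push platform-b HEAD

# The complete history now lives in three places at once
git remote -v
```

If one platform disappears, nothing is lost: every clone and every remote holds the whole project.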

So, what is alwaysdata doing for open source?

We believe in open source. We based our solution on the Django framework; every third-party software we use for hosting is open source; we sometimes release some of our internal developments (see Deploy at lightning speed with Git hooks). It was time to give back to the community: this is why we offer a free 10 GB plan for open source projects.

We think that many alternatives are more valuable than a single offer, even if that one is great. Earlier this year, we ran a test with the Sailor framework project: as we are one of the only hosting providers to support Lua natively, they contacted us to ask whether we could set up a partnership. We then started to think about what we could do for open source. We already had a free 100 MB plan, which was a bit small for open source projects; what if we offered a 10 GB plan, for free?

Open source projects, this is our contribution: if you need a way to host your project (repositories, websites, demos, etc.), you can do it on alwaysdata for free. We will never charge you for anything. We only restrict the account to hosting active open source projects.

We don’t expect the whole open source community to come to alwaysdata4). But the world needs as many good-minded, respectful alternatives to other OSS-compliant hosting solutions as there may be. If you want to benefit from this offer, get in touch at!

May the source be with you.

Notes

1. this is a very simplistic view; Git is even more powerful
2. Software as a Service
3. yes, you can have multiple remotes for a repository, and since you host the entire history of your project locally, you don’t have to stick to one platform. Did you know that?
4. but we welcome any project that thinks our offer fits its needs
About our community Tue, 15 May 2018 11:37:49 +0000 alwaysdata started in 2006 because we, as a web agency, hadn’t been able to find a hosting platform capable of fitting our needs. More than ten years later, now that hosting is our core business, we’re still here, with thousands of websites and applications hosted on our infrastructure and served daily to millions of users. Behind alwaysdata, there is a team of cool people who want to keep doing what they love: their best to provide you a finely-honed hosting platform. And have fun. A lot.

The IT Crowd @Gifbin

So, because our adventure could not exist without you, we decided to do more things (and have more fun) with our community. Here’s a list of our first partnerships, the ones that open the way to many more projects!

Ready for space? Get your Cyberspace Building Crew seat!

Last year, Julien Dubedout, a French designer who works on several projects (like the Caliopen messaging client), had a discussion in which a few people imagined logos for Web workers, inspired by the NASA mission patches. Several months later, he started a crowdfunding campaign on Kickstarter for the Cyberspace Building Crew project, to turn those patches and stickers into reality.

Unfortunately, the initial batch of designs was missing the big one for us at alwaysdata: the one representing hosting. We quickly decided to fix that! After making sure Julien was OK with designing a new patch for our job, we unlocked a Sponsor pack to allow him to create the design for hosters. Here’s what he drew for all hosters: we’re now the Space Station for Web workers!

The cyberspace building crew patch for hosters: the space station

The pack allows us to get a bunch of those designs, in both patch and sticker formats. Feel free to come find us at conferences to get one when they’re ready.

About conferences

As Web workers, we constantly keep an eye on the new technologies, frameworks, and solutions that will help you, developers, build your services. We want to provide you the best support for the technologies you need on your backend. To be able to give you this support, we learn a lot of things and do a massive amount of technology watching.

Giving you feedback on what we’ve learned is essential for our team, and that’s why we are currently writing more articles here, on various technical subjects. We’re also increasing our participation in many events, as attendees and as speakers.

But that’s not good enough for us. We believe in diversity as well, so we decided to support communities involved in diversity in tech. For the moment, this will consist of tickets we’ll offer for the conferences we attend, starting with the upcoming and Web2Day events. We are finalizing the details of how to distribute those tickets. We will keep you informed both on Twitter and here, so stay tuned!

Press Newspaper GIF @Giphy

If you’re involved in communities engaged in diversity in tech, please get in touch in the comments or at, so we can find a way to work together.

One last piece of news: a new batch of cool alwaysdata t-shirts is coming. Try to find m4dz at tech events to get one 😉
