Custom logs Thu, 26 Jul 2018 11:37:58 +0000 Here’s the last blog post in our series about the new features of our reverse-proxy engine. Previously, we talked about the WAF and the HTTP cache. Now it’s time to introduce custom logs.


Upstream logs

At alwaysdata, an upstream is an HTTP server our proxy uses as a backend to serve pages to your visitors. An upstream can be a built-in HTTP server embedded in your application, or a dedicated HTTP server like Apache or uWSGI.

We now write all messages output on the standard streams1) to a dedicated file, available in the ~/admin/logs/sites/ directory. These logs allow developers and DevOps engineers to monitor and debug their applications running on our platform. When you rely on a custom upstream (like a Node.js service), you can read your application’s output, which helps you find the glitch when a service refuses to start properly.

The same file collects the messages from all upstreams belonging to the same alwaysdata account. Each upstream marks its lines in the log file with its PID2). PIDs let you determine which process (i.e., which upstream) output a given line. This identifier appears between brackets after the date: [14/Jul/2018:10:04:21 +0200] [PID]. When an upstream stops — e.g., after staying idle for a long time — it may get a different PID when it restarts. Two lines are written to the log file each time an upstream wakes up, letting you match PID and upstream:

[14/Jul/2018:10:04:21 +0200] Upstream starting: /command/to/your/upstream ...
[14/Jul/2018:10:04:21 +0200] Upstream started PID: 12345
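If you want to automate the matching, these two wake-up lines are easy to pair with a script. Here is a minimal Python sketch (a hypothetical helper, not an official tool) that builds a PID-to-command map from such a log:

```python
import re

# Match the two lines written each time an upstream wakes up.
START_CMD = re.compile(r"\[[^\]]+\] Upstream starting: (?P<command>.+)")
START_PID = re.compile(r"\[[^\]]+\] Upstream started PID: (?P<pid>\d+)")

def map_pids(lines):
    """Build a {pid: command} map from upstream wake-up line pairs."""
    pids = {}
    pending = None
    for line in lines:
        m = START_CMD.match(line)
        if m:
            # Remember the command until we see its PID line.
            pending = m.group("command")
            continue
        m = START_PID.match(line)
        if m and pending:
            pids[m.group("pid")] = pending
            pending = None
    return pids
```

Feed it the lines of your log file and look up the PID printed at the start of any message to find the upstream that wrote it.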

Access logs

Now, you can choose the name given to the access log files. To customize this entry, go to Sites → Edit → Logs.

Screenshot of website's logs customization view

You can also edit the output formats. If you need to parse your log files with a parser or a script, custom output formats let you make your logs fit your workflow. This field accepts variable names between brackets {}, which are substituted with their values at write time. You can also include arbitrary character strings. The syntax and the available variables are documented on our logs page.

The default format is:

{request_hostname} {client_ip} {default} {default} [{completion_date:{%d/%b/%Y:%H:%M:%S %z}}] {request} {status} {response_size} {referer} {user_agent}

It returns this string: - - [16/Jul/2018:12:04:07 +0200] "GET /wp/ HTTP/1.1" 200 55380 "-" "curl/7.47.0"

To customize the output to include the protocol, the request duration, and some literal strings, you could use the following syntax:

[{completion_date:{%d/%b/%Y:%H:%M:%S %z}}] protocol: {protocol} {request} duration: {duration} seconds

Which outputs:

[16/Jul/2018:12:04:07 +0200] protocol: "https" "GET /wp/ HTTP/1.1" duration: 0.134 seconds
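If you parse these logs with a script, the default format maps cleanly onto a regular expression. A minimal Python sketch, assuming a full log line (the hostname and client IP values below are hypothetical examples):

```python
import re

# Regex for the default access log format: hostname, client IP, two
# "default" fields, date, request, status, size, referer, user agent.
LOG_RE = re.compile(
    r'(?P<hostname>\S+) (?P<client_ip>\S+) \S+ \S+ '
    r'\[(?P<date>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\d+|-) '
    r'"(?P<referer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

line = ('blog.example.org 203.0.113.7 - - [16/Jul/2018:12:04:07 +0200] '
        '"GET /wp/ HTTP/1.1" 200 55380 "-" "curl/7.47.0"')

m = LOG_RE.match(line)
if m:
    print(m.group("status"), m.group("request"))  # → 200 GET /wp/ HTTP/1.1
```

If you change the output format, adjust the regular expression accordingly.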

With fully customizable access logs and easier upstream debugging, we have designed a more comfortable hosting platform. Monitoring and observing production services at alwaysdata becomes painless.

This blog post is the last one about our new proxy’s features. We built our service for you and with you. Please help us improve it further by giving us feedback in the comments and telling us which features you are missing!


Notes

1. stdout and stderr
2. Process IDentifier
HTTP Cache Wed, 25 Jul 2018 12:37:50 +0000 Here is our second article dedicated to our new reverse-proxy engine and its awesome features! After the Web Application Firewall, let’s now have a look at the HTTP cache built into our infrastructure.


What is an HTTP cache?

A good blog post is a post with a chart

We tested our WordPress blog’s performance using the new HTTP cache built into our proxy. Here is the result, which makes us bet that you may like this new feature:

Enabling the cache considerably improves the number of requests the proxy can handle: from only 15 req/s without it to 2604 req/s with it — a factor of 173 over the same time frame. The response time also improves, falling from roughly 63.65 ms to 0.38 ms. Not bad for a feature that takes no effort to use!

We made this benchmark using ApacheBench, requesting the blog homepage. We ran each benchmark1) four times, with and without the cache enabled, before compiling the results. Our blog runs on a dedicated server, but we expect a similar improvement on shared hosting instances. You can run the test yourself by connecting to your account over SSH and running the ab command with the same options against your own website.

How does it work?

A cache is a temporary storage that can serve previously stored results when they are requested again. An HTTP cache is a cache that stores web pages and assets. It is primarily used to decrease the load on an upstream server when it has to serve a frequently requested page that doesn’t change between two requests.

When a client requests a page from a web server, the server generates an HTML response and sends it to the client over the network. Before the response leaves the infrastructure, the HTTP cache intercepts it and stores it in memory before letting it go.

Caching a resource (schema)
Caching a resource when a new request happens (icons from The Noun Project)

When our proxy encounters the same request again, it asks the cache for an available version. If the page is available in the cache’s memory, it is served from there instead of asking the upstream server.

Serving a cached resource (schema)
Serving previously cached resource (icons from The Noun Project)

Use it at alwaysdata

If you want to use the HTTP Cache, you can enable it for any site individually in the Sites → Edit → Cache section. Tick the Enable the cache option.

Screenshot of the interface to enable cache for a web site

You must set the TTL for the pages served by the website. The TTL defines how long the cache retains a page before expiring it, so choose it carefully. While we recommend a high TTL for pages that rarely change, you should reduce it for highly dynamic content like a news website. If you set too long a duration, your visitors may see a stale page.

For instance, we need every visitor to see the new homepage as soon as we publish an article, because the previous version of the homepage is then outdated. So we prefer a TTL between 5 and 10 seconds. This way, we benefit from the high performance offered by the cache with a relatively low risk of serving an old page.

This feature is currently in beta and may evolve over the next weeks.

This feature needs your application or website to allow the cache to handle requests. If resources aren’t explicitly marked as _cacheable_ by your app, our HTTP cache may not be able to store them.
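Marking a resource as cacheable usually means sending the right Cache-Control header. Here is a minimal WSGI sketch of what that can look like for a Python upstream; the header value and the 10-second max-age are an illustrative choice matching the short TTL discussed above, not a required setting:

```python
def application(environ, start_response):
    """Minimal WSGI app whose responses a shared HTTP cache may store."""
    body = b"<h1>Hello from the upstream</h1>"
    start_response("200 OK", [
        ("Content-Type", "text/html; charset=utf-8"),
        ("Content-Length", str(len(body))),
        # Mark the page as publicly cacheable for 10 seconds; a shared
        # cache may then serve it without asking this upstream again.
        ("Cache-Control", "public, max-age=10"),
    ])
    return [body]
```

Conversely, responses carrying Cache-Control: private or no-store (common defaults for session-based apps) are typically skipped by a shared cache.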

What’s behind the scene?

For technical people, here’s how we enable the cache. We chose to write, in Python2), a module that follows RFC 7234. A local Redis instance stores the cached resources, which allows us to manage the memory dedicated to storage effortlessly.

We also chose to implement the HTTP PURGE verb. This method lets you remove the cached version of a resource by calling it on the resource’s URL. You can then easily force a cache refresh.

After performance, we made a significant effort on logging! In our next and last blog post, we introduce the new logging system that lets you store custom-formatted logs, to help you debug your upstream applications effortlessly.

Notes

1. ab -c 10 -t 60
2. cause we ♥ Python at alwaysdata
Web Application Firewall (WAF) Tue, 24 Jul 2018 11:37:19 +0000 We have deployed a new version of our HTTP reverse-proxy engine on our production servers. It embeds a lot of new features, which we cover in this collection of blog posts.

Here, we want to introduce you to the Web Application Firewall (WAF) now built into our reverse-proxy.

What’s a WAF?

All software has bugs. That’s true for web applications too. They may present security holes that compromise their integrity. Attackers may want to gain full control over the web application; we call this kind of attack an infection. If they compromise the service itself, the consequences can be dramatic, ranging from a simple unavailable website to a leak of personal data.

Cybersecurity is a full-time job. By following some good practices, and by using a WAF, you can raise your security level. Good news: we now embed one directly in our infrastructure, and using it is as simple as a click.


A Web Application Firewall is a firewall that protects your website from malicious requests. It parses HTTP(S) requests and allows or denies them access to the server. It can block, alert on, or quarantine requests it considers malicious. It can also react to many kinds of attack, limiting infections.

The request goes through the WAF to be analyzed. The firewall then decides to let the request go to the upstream server or to drop or isolate it (illustration)
The HTTP request goes through the WAF (icons from The Noun Project)

ModSecurity WAF

Instead of developing a new solution from scratch, we chose the ModSecurity WAF, developed by Trustwave SpiderLabs. This project has an excellent reputation in the security field. It’s also an open source project, so we can stick to our policy of giving you a hosting platform powered only by open source solutions. Finally, the ModSecurity community is very active, which keeps the project evolving day by day.

ModSecurity is only a security engine. It uses sets of rules to analyze a request and mark it as malicious or not. We chose the open source OWASP ModSecurity Core Rule Set (CRS), which offers an excellent level of protection for web applications. It also covers the OWASP Top 10 with a very low rate of false positives.

Configuring a WAF at alwaysdata

For convenience, the built-in WAF can be enabled individually for every website hosted at alwaysdata.

Six profiles with various levels of protection are available:

  1. Disabled
  2. Basic
    • Force strict HTTP protocol
    • Detect malicious bots
  3. Strong
  4. Full
    • Strong profile
    • Detect attacks for PHP language
    • Detect attacks by Local File Injection (LFI)
    • Detect attacks by Remote File Injection (RFI)
  5. WordPress
    • Full profile
    • A WordPress-specific ruleset
  6. Drupal
    • Full profile
    • A Drupal-specific ruleset

Please note that activating the WAF may increase the latency of every HTTP(S) request. This latency (a few ms) increases with the robustness of the selected profile, because the time spent parsing the request grows with the number of OWASP rules to apply.

To use it, select a protection profile in the Sites → Edit → WAF section.

Screenshot of the interface allowing you to enable the WAF in your web site

This feature is still in beta.

Our objective is to give you a reliable, robust, and safe hosting environment without needless complexity. That’s why we provide a solid built-in WAF that you can enable with a single click.

After security, performance! In our next blog post, we introduce our new HTTP cache and its impact on the delivery of your websites.

We believe in open source projects Fri, 08 Jun 2018 10:43:09 +0000 And all of a sudden, Microsoft is acquiring GitHub, infuriating the open source community these last days. Behind the angry tweets, there’s a realization that a monopolistic situation may jeopardize the open source ecosystem. It seems about time to discuss why decentralization is necessary, and to present our initiative: supporting open source projects.


GitHub, Microsoft, Open source: why such a shitthunderstorm?

GitHub is a closed source platform built on an open source project: Git. Git is a distributed version control system. It allows developers, designers, editors, etc., to save their projects step by step on a timeline. They can keep track of what has been done, by whom, and when, and they can roll back some modifications if needed, or even restart their projects from a past state1). While Git only brought the version control system, GitHub enriched it with social features like issue tracking, documentation wikis, collaboration tools (pull requests), reviews, and many more.

GitHub is a SaaS2) solution, available in two flavors: free for open source projects; for a fee, or on-premise, for private repositories in the enterprise edition. It quickly gained the attention of the open source community thanks to its simplicity of use and its “social” tools. It has grown into the central place for open source projects over the last ten years.

Maybe you’ve already noticed the paradox. Let’s say it again: Git is a distributed system; GitHub has grown into a central place. It’s the never-ending war over decentralizing the Web. Year after year, because many projects use the platform, and because it’s easy, it became the place to be when releasing an open source project. It gave visibility, and quick, simple access to an upcoming community. It even became a way to distribute dependencies as code, with many languages choosing it as a native backend (see Golang, Node.js, etc.)

However, a single place hosting everything means you take a considerable risk if it fails. As Hubert Sablonnière said:

So it is. Or so it seemed to many defenders of the free and open source philosophy when Microsoft recently announced its intention to acquire GitHub. Thousands of open source contributors have been there for years, and many of them remember how Microsoft acted in the past. Even though Microsoft is now quite involved in open source contributions, some are frightened by this announcement and have started wondering what Microsoft has in mind for the future of GitHub. So far, it has mainly spoken about cloud deployment integration, but what will happen to GitHub-driven projects like Hubot, Electron, Atom, etc.?

This acquisition painfully revives the debate around decentralization, this time from the developers’ side. Maybe some projects will host their sources independently to stay away from the GAFAM and their monopolies. Don’t forget that Git is decentralized by default3). GitHub is only a platform. The choice is still yours.

So, what is alwaysdata doing for open source?

We believe in open source. We base our solution on the Django framework; every third-party software we use for hosting is open source; and we sometimes release some of our internal developments (see Deploy at lightning speed with Git hooks). It was time to give back to the community: this is why we offer a free 10 GB plan for open source projects.

We think that many alternatives are more valuable than a single offer, even a great one. Earlier this year, we ran a test with the Sailor framework project: as we are one of the only hosting providers to support Lua natively, they contacted us to see whether we could set up a partnership. We then started to think about what we could do for open source. We already had a free 100 MB plan, which was a bit small for open source projects; what if we offered a 10 GB plan, for free?

Open source projects, this is our contribution: if you need a place to host your project (repositories, websites, demos, etc.), you can do it on alwaysdata for free. We will never charge you for anything. We limit the account to hosting active open source projects only.

We don’t expect the whole open source community to come to alwaysdata4). But the world needs as many good-minded, respectful alternatives to other OSS-friendly hosting solutions as there can be. If you want to benefit from this offer, get in touch!

May the source be with you.

Notes

1. it’s a very simplistic point of view; Git is even more powerful
2. Software as a Service
3. yes, you can have multiple remotes for a repository, and since you host the whole history of your project locally, you don’t have to stick to one platform. Did you know that?
4. but we welcome any project that thinks our offer fits its needs
About our community Tue, 15 May 2018 11:37:49 +0000 alwaysdata started in 2006 because we, as a web agency, hadn’t been able to find a hosting platform capable of fitting our needs. More than ten years later, and now that hosting is our core business, we’re still here, with thousands of websites and applications hosted on our infrastructure, served daily to millions of users. Behind alwaysdata, there is a team of cool people who want to keep doing what they love: their best to provide you a sharp hosting platform. And to have fun. A lot.


So, because our adventure could not exist without you, we decided to do more things (and have more fun) with our community. Here’s a list of our first partnerships, the ones that open the way to many more projects!

Ready for space? Get your Cyberspace Building Crew seat!

Last year, Julien Dubedout, a French designer who works on several projects (like the Caliopen messaging client), took part in a discussion in which a few people imagined logos for Web workers, inspired by the NASA mission patches. Several months later, he started a crowdfunding campaign on Kickstarter for the Cyberspace Building Crew project to make those patches and stickers a reality.

Unfortunately, the initial batch of designs was missing the big one for us at alwaysdata: the one representing hosting. We quickly decided to fix that! After making sure Julien was OK with designing a new patch for our job, we unlocked a Sponsor pack to let him create the design for hosters. Here’s what he drew for all hosters: we’re now the Space Station for Web workers!

The cyberspace building crew patch for hosters: the space station

The pack gives us a bunch of those designs in both patch and sticker formats. Feel free to come and see us at conferences to get one when they’re ready.

About conferences

As Web workers, we constantly keep an eye on new technologies, frameworks, and solutions that will help you, developers, build your services. We want to provide the best support for the technologies you need on your backend. To be able to give you this support, we are learning a lot and doing a massive amount of technology watch.

Giving you feedback on what we learn is essential to our team, and that’s why we are currently writing more articles here, on various technical subjects. We’re also increasing our participation in many events, as attendees and as speakers.

This isn’t good enough for us. We also believe in diversity. So we decided to support communities promoting diversity in tech. For the moment, this consists of tickets we will offer for the conferences we attend, starting with upcoming events such as Web2Day. We are organizing the last details of how to distribute those tickets. We will keep you informed both on Twitter and here, so stay tuned!


If you’re involved in communities engaged in diversity in tech, please get in touch in the comments or by email, so we can find a way to work together.

Last news: a new batch of cool alwaysdata t-shirts is coming; try to find m4dz at tech events to get one 😉

GDPR at alwaysdata: what does it imply? Mon, 07 May 2018 14:20:55 +0000 Voted in 2016 by the EU Parliament, the new General Data Protection Regulation becomes enforceable on May 25, 2018. This regulation is an essential change in European data protection law, and replaces the EU Data Protection Directive (Directive 95/46/EC) as well as local laws relating to data protection.

We, as a hosting provider, are covered by the GDPR and ensure our services comply with the terms of the regulation by May 2018. As we have already stated, we strongly believe in privacy, and we encourage initiatives that strengthen the fundamental right to privacy for all citizens of the world.

To keep our customers informed about what the GDPR is, and how it applies to alwaysdata services, here’s our digest.


The GDPR introduces some terms that may need explanation. Here’s a lexicon of the terms used in the regulation, in our TOS, and in this article.

Personal Data
Any information related to an identified or identifiable natural person. It includes civil data (birthdate, address, etc.) as well as technical data (IP address, GPS coordinates, etc.)
Data Controller
The Controller is a natural or legal person, public authority, agency, or any other body which determines the purposes and means of the processing of personal data. It is the one who decides what to do with the data.
Data Processor
The Processor is any body which processes personal data on behalf of the Controller.
Data Protection Officer
The Data Protection Officer (DPO) is the person who, inside a company, ensures that data processing operations comply with all applicable European regulations. The DPO is entirely independent of the company’s other operations.
Subcontractor
Any partner that, for the purposes of personal data processing, is mandated by the data processor and may have access to personal data transmitted by the processor. A subcontractor must be GDPR compliant too, and the client must be informed that the subcontractor may access their personal data.

What is the GDPR?

The General Data Protection Regulation (GDPR) is the new European privacy law that replaces any existing law about data privacy on EU territory. It takes precedence over any local law as well as the EU Data Protection Directive. It doesn’t introduce radical changes but is intended to enhance and harmonize EU data protection laws for all EU citizens. It applies worldwide if your data is located inside the EU, or if your service handles EU citizens’ personal data. It becomes enforceable on May 25, 2018.

It mainly covers the following topics:


Prohibition unless permitted

Any processing of personal data is prohibited unless expressly permitted.


Purpose limitation

Companies may only collect and process personal data for specific purposes. These purposes must be clearly stated, and any future use of the data must be documented.

Data minimization

Companies must collect as little data as possible, and only as much as necessary. It also means that “blind” data collection for unspecified future purposes is prohibited.


Transparency

Data processing should be understandable and comprehensible to anyone concerned. Companies are required to provide crystal-clear information about what data they use, and for what purposes.


Data security

Companies must ensure, and be able to prove, that they technically protect the personal data of their clients and employees. Data must be protected against unauthorized processing, alteration, theft, destruction, etc.

Who does the GDPR apply to?

The GDPR applies to any company or organization operating in the EU, or processing EU citizen’s personal data.

What role does alwaysdata play in the GDPR context?

alwaysdata can be considered both a data processor and a data controller. The former because we most frequently just “process” our customers’ data on their behalf, as we host your apps, websites, and services; the latter because we also hold information about our clients in our own systems, e.g., your contract information.

alwaysdata commitments as a processor

  • We won’t process any data without the explicit order of the data controller.
  • Data are kept inside the EU, unless the customer selects a location in a geographical area outside the EU. This location may evolve, always under the customer’s control.
  • We inform customers of any enlisted subcontractor that accesses their data, which data is concerned, and for what purposes.
  • We apply security standards to protect the data lifecycle.
  • We will publicly report any data breach incident without undue delay.
  • We provide documentation to prove our conformity to GDPR.

alwaysdata commitments as a data controller

  • We limit the personal data collected to what is strictly necessary when you order a service, for billing or support purposes.
  • We only use personal data for what is contractually intended.
  • We do not keep data when it is no longer relevant.
  • We do not transfer data outside of the EU without your explicit consent.
  • We implement appropriate technical measures to ensure a high degree of security on personal data.

alwaysdata’s security measures

We distinguish two kinds of security measures: those concerning the data stored by the customer, and those concerning the security of the infrastructure that stores the information.

Regarding the data stored by our customers: the customer is solely responsible for their data, by ensuring the security of whatever service, website, or application they deploy on the alwaysdata infrastructure.

Regarding alwaysdata’s infrastructure, we are committed to ensuring optimal security. Physical access to the systems is strictly regulated; software is monitored, patched, and updated with security releases; and we use technical devices to prevent attacks and intrusions.

What has alwaysdata been doing to prepare for the GDPR?

alwaysdata was already compliant with the 95/46/EC Directive, so we built on that foundation. We mainly worked on a new version of our TOS to make sure they reflect our obligations under the GDPR. Technically, we have followed the Privacy by Design principles since we started, and we have never collected any information we didn’t explicitly need. We have been GDPR-ready for a while now. Time for the compliance gig!

The principles of Privacy by Design and Privacy by Default

Our design and development processes already comply with Privacy by Design and Privacy by Default. Privacy has been one of our main concerns for a while now; you can watch a talk about Privacy by Design [fr] given by m4dz, our tech evangelist, at the latest Breizhcamp 2018 edition.

Extensive information rights and the right to deletion

We already delete personal data upon account deletion. We keep nothing on alwaysdata’s side, except what is legally required, like transaction data and logs, for a limited amount of time.

The right to data portability

Our API, available for a while now, lets you access all the information about you stored in alwaysdata’s systems. You can also ask us about your personal data by email.

No linking of consents

We do not transmit any information to any subcontractor, other than the information mentioned in our TOS for support purposes, without your explicit consent.

What must customers do?

As far as alwaysdata is concerned, you have nothing to do besides reading and accepting our new TOS. We also strongly encourage our customers and partners to start preparing for the GDPR now. If you already have robust security and good data privacy practices, the shift should be simple.

You can access our new TOS here. Feel free to ask in the comments if you have any questions about EU data protection at alwaysdata.

We’ll be there: events and conferences in May and June 2018 Fri, 04 May 2018 08:59:33 +0000 It’s been a long time since our last post. We are actively preparing new releases and articles. We’re also involved in many events and conferences that we need to prepare.

To make sure none of you miss the rendezvous, here’s a quick list of where we will be:

  • May 17, Sophia Antipolis: RivieraDev, La Crypto pour les devs (regular talk, FR)
  • June 1, Paris: DotScale, Automate deployment using Git, compared to other solutions (lightning talk, EN)
  • June 6, Paris: Vue.js meetup, Use Atomic Design in Vue.js components development cycle (regular talk)
  • June 8, Lille: Takeoff Conf, From ground control to the moon: choose your hosting partner (regular talk, EN)
  • June 10, Lille: DjangoCong, La Crypto pour les devs update: Déploiement continu et feature-flipping, uniquement avec Git (regular talk, FR)
  • June 15, Nantes: Web2Day, Privacy by Design (regular talk, FR)
  • June 28–29, Choisy-le-Roi: Pas Sage en Seine (PSES) Architecture Zero Knowledge et Webapps : est-ce possible ? (regular talk, FR); Full-remote : guide de survie en environnement distant (regular talk, FR) ; Privacy by design (regular talk, FR)

@m4d_z, our Tech Evangelist, will talk at the events above. Get in touch with him if you’re there!

Secured remote access, the hard way Tue, 27 Mar 2018 21:01:31 +0000 I recently told you about the choices you have to make for your sysadmin stack. These choices impact your whole technical chain, from your OS and its setup to the system architecture. It means you have to adjust the cursor between maintaining each element yourself and relying on a partner to manage it for you.

Securing remote access ranges from trivial things to points less understandable to non-experts. Consider this article as primary guidance, and feel free to browse the links to more in-depth articles1)2).


TL;DR: security is a vast and complex realm of knowledge. If you only want to know how to improve your remote access setup to alwaysdata servers, read the section about using secured methods.

Don’t let anybody enter your home

You run your application on remote servers, which lets you provide access to your service from anywhere. You usually have no physical access to those machines3), so you use remote access — most of the time through a remote shell, something like SSH. What SSH provides is access to both a shell and a low-level file manager on your remote machine. Consider it a door to your living room: not taking care of it is nonsense. Never do this at home, kids.

An important security concern is how you manage remote access to your production servers. Here too, you may want to do it yourself, or you may prefer a service that embeds a secured setup for you. At alwaysdata, we think that using a service that handles it for you is often the best choice, but some parts remain yours to handle. Because this part is sometimes difficult to understand and maintain, we want to give you some advice on how to secure your access properly.

The right level of security is the one you’re able to understand and manage. So, to make sure you have the expertise needed to be safe at home, let’s talk a bit about what you need to know to protect your remote access and avoid letting just anybody in. My goal is not to turn you into a security expert; it is merely to ensure you know the basics (and probably a bit more) about security and why it matters.

Fortunately, common SSH servers ship with a default configuration that is mostly secure. OpenSSH, the default SSH server on most platforms, does. But distributing a default configuration means maintainers have to address common use cases rather than a high-security setup. This setup will probably lock out 90% of malicious users. Think of it as fitting a standard solid door to your mansion: better than nothing, but still not an armored door. Let’s consider how to strengthen this default setup.

Be Safe, Be Smart

Securing your remote access means taking care of a few concerns, mainly because you remain in charge of the client side. Even with a properly secured server, you may get into trouble if your client-side protection is too weak. Let’s take a look at your responsibilities.

Use SSH Keys to connect

We recommend using SSH keys rather than password authentication for SSH remote access. This way, you can efficiently manage who can access your remote server and from which device, and revoke access in case of emergency4). It’s a good idea to have one key pair5) per user per device: it gives you the flexibility to revoke access when a device is lost or stolen, instead of revoking the user’s access entirely.

To know how to generate a secure, updated key, take a look at the ed25519 key format section.

Avoid password connection

Passwords can’t be made secure enough to be the only protection against malicious access. You should use key-based login as mentioned above to make sure your remote user is protected, e.g., from a brute-force attack trying to crack your user’s password.

At alwaysdata, SSH connection using a password is disabled by default, and only connections using SSH keys are allowed. You have to write your public key into the ~/.ssh/authorized_keys file manually6). If you need to enable password access nonetheless (e.g., to transfer your public key first), check the box authorizing connection using a password. Please keep in mind that it’s a less secure method, so turn it off when you don’t need it.
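
If you add your key by hand, a minimal sketch looks like this (the file name id_ed25519.pub is an assumption: use your own public key file). SSH is strict about permissions, so set them explicitly:

```shell
# On the server: create ~/.ssh if needed, append the public key,
# and tighten permissions (sshd's StrictModes rejects
# group/world-writable files).
mkdir -p ~/.ssh
chmod 700 ~/.ssh
cat ~/id_ed25519.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```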

If you run your personal server, after copying your public key on the server, disable password-based login in sshd configuration.

# /etc/ssh/sshd_config

AuthenticationMethods publickey
PubkeyAuthentication yes
ChallengeResponseAuthentication no
PasswordAuthentication no
UsePAM no

It also seems that some users/administrators think not protecting their accounts with a password is a good idea. Personally, I think even the weakest password is better than no password at all. In any case, you must prevent those no-password users from logging in remotely without any protection.

# /etc/ssh/sshd_config

PermitEmptyPasswords no

We forbid empty passwords on alwaysdata servers.

Use strong passphrases

Using strong keys is essential, but you can’t rely on them without securing them: each SSH key must be passphrase-protected with a unique, dedicated, reliable password. The same applies to your user’s login access, so feel free to use the same method to set up your user’s password.

I have two methods to generate random passphrases in CLI7):

$ openssl rand -base64 32
$ gpg --gen-random --armor 1 32

You should have openssl rand or gpg (or both) installed on your system. Make sure to check your password resilience against brute-force attacks with the How Secure Is My Password tool. You don’t want a weak password to compromise your whole business security.

Use cryptographic hardware

If you’re concerned about secure remote access, you should consider using client certificate authentication backed by a hardware device, so your keys stay safe from attackers and inaccessible from your computer. Cryptographic hardware like the Nitrokey Pro or YubiKey 4 can store your keys/certificates and manage the client authentication for you.


Stay up-to-date

Keep your SSH server updated

SSH security breaches are regularly discovered, and quickly patched by the projects’ maintainers and contributors. Keeping your server updated is necessary, and that implies keeping your SSH client up-to-date too. As I’m writing this blog post, the current OpenSSH version is 7.6, released Oct. 3, 2017. Check your current version:

$ ssh -V
OpenSSH_7.6p1, OpenSSL 1.1.0g  2 Nov 2017

Use SSH v2 protocol

SSH exists in two protocol versions. The current, modern one is v2: it is more secure, more flexible, and encrypts more of every step of the process. Almost every client has supported v2 for a long time, so feel free to disable v1 and serve only the latest if you run your own server.

# /etc/ssh/sshd_config

# "Protocol" defaults to "2,1"; force v2 only
Protocol 2

For legacy reasons, v1 remained available on alwaysdata servers for a while. Since the release of OpenSSH 7.6, the v1 protocol has been removed from the codebase entirely (unavailable even via compilation flags), so stay up to date.

Rely on strong SSH Key with ED25519

You can use several key formats for your SSH keys. Preferably, use the ED25519 algorithm, as it is the best choice nowadays regarding security. Yes, it’s relatively new, but it’s well supported in production right now, and as you manage the server side (or rely on a trusted, dedicated partner), compatibility shouldn’t be a big deal.

$ ssh-keygen -a 1000 -t ed25519 -C "username@device"

This command will:
  • generate a new key
  • -a: set the number of rounds for the key derivation function to 1000, which increases security but also the login time. If a quick login is critical to you, consider decreasing this value (the default is 16; 100 already offers robust protection against brute-force attacks)
  • -t: force the ed25519 algorithm
  • -C: identify your key with your username/device

Network protection

This section contains advanced tips for sysadmins who want to lock down their environment tightly. They may or may not apply, depending on your context and your threat model. In fact, the former sections cover most use-cases of remote access. But you may want to reinforce your security further.

Firewall the SSH TCP Port

If you always connect to your servers from a known set of dedicated IPs, it’s useless (and dangerous) to let any incoming connection attempt to log in. You should restrict SSH login to known addresses in your firewall rules.

$ iptables -A INPUT -p tcp -s <your-ip> -d <server-ip> --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
$ iptables -A OUTPUT -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Here, we allow connections from your own IP (you have to substitute it, obviously) to the SSH port 22, in TCP only, on the local machine’s address, using iptables8). Firewalling can be hard, but it’s one of your strongest defenses against attackers.

Some of our machines, which handle parts of the alwaysdata architecture not meant to be publicly exposed, are IP-filtered like that. If you’ve got a VPS or Dedicated Server offer, you can activate IP filtering in your admin panel to ensure only your IPs are allowed to connect to your servers.

Blacklist SSH crackers and brute-force attackers

One problem with attackers trying to gain access to your machine is that they will request access a huge number of times. Even if they never get in, this overloads your SSH server, may slow down your system, and can even amount to a denial of service. To prevent that, you can use your firewall to ban the attackers’ IPs. Of course, you won’t blacklist them manually: instead, rely on tools like fail2ban, DenyHosts, or a custom rule in iptables or ufw to temporarily ban them.

In alwaysdata architecture, SSH servers are protected by a fail2ban filter.
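
A minimal jail configuration sketch for fail2ban’s sshd filter could look like this (the thresholds are illustrative; distribution defaults vary):

```
# /etc/fail2ban/jail.local
# Ban an IP for one hour after 5 failed attempts within 10 minutes.
[sshd]
enabled = true
port = ssh
maxretry = 5
findtime = 600
bantime = 3600
```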

Change SSH port and limit port binding

OpenSSH listens on port 22 by default, which makes it vulnerable to automated attacks: attackers will try this port first when targeting SSH breaches. If you don’t care about compatibility and use machines you’re comfortable with, don’t hesitate to change the default port and the binding addresses the server listens on to decrease the attack surface.

# /etc/ssh/sshd_config

Port 2002
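
Your clients must then be told about the non-standard port. Rather than typing ssh -p 2002 on every connection, you can record it in the client configuration (the host name is a placeholder):

```
# ~/.ssh/config
Host example.com
    Port 2002
```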

Port knocking

This is one of the most fun tips.

It’s a method that forces remote users to connect to a series of ports in order to trigger a rule in your firewall that opens access to the SSH port for their IP. By default, the firewall rejects connections to the SSH port. Using a dedicated tool (knock), you contact your server on a sequence of ports. If the series is correct, the firewall is updated, and a new rule is added to allow only your client IP to connect through SSH. Simple, smart, efficient9). On the server side, it relies on the knockd tool.
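
As a sketch, a knockd setup could look like this (the port sequence is an arbitrary example: pick your own and keep it secret):

```
# /etc/knockd.conf
[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
```

The client side then knocks before connecting, e.g. knock example.com 7000 8000 9000 && ssh user@example.com. A companion [closeSSH] section usually removes the rule afterwards.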


Here’s our last tips section. Congrats if you’ve read the article this far! It presents some extra setup available to increase your security a little further. It’s often unnecessary, but it’s good know-how.

Trust SSH Host Keys

To be trusted by clients, the SSH server presents a key that allows you to authenticate it. This key is generated automatically and belongs to that server only. There’s one key per supported algorithm, and they are presented to the client depending on the setup. On your first connection, you can compare this key with the one provided in your SSH admin section.
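
To compare fingerprints, you can print the server’s ED25519 host key fingerprint through a trusted channel (e.g. your provider’s console) and check it against what your client displays on first connection:

```shell
# On the server: print the fingerprint of the ED25519 host key
ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
```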

As modern clients support the ED25519 algorithm, your server can serve only this one and disable the others to force its use.

# /etc/ssh/sshd_config

#HostKey /etc/ssh/ssh_host_dsa_key
#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key

In alwaysdata setup, we still keep all formats available for compatibility reasons.

Disable root access

Your root user is the super-admin user of your *nix system: the one allowed to do everything on the machine, even wipe it entirely10). So it should be used carefully. Most of the time, you do not need to be root to do what you want, and your system permissions should allow you to execute your commands safely. So it’s a good idea to disable remote root access11). This way, even if a malicious user gets access to the server, they won’t be able to gain root privileges easily.

# /etc/ssh/sshd_config

PermitRootLogin no

Idle log out timeout

It can be a good idea to limit how long a user can stay connected to your system without any activity, to prevent later malicious use. Decrease the number of idle connections allowed and the inactivity timeout to make sure nobody stays logged in longer than needed.

# /etc/ssh/sshd_config

ClientAliveInterval 300
ClientAliveCountMax 0

Chroot users

Users logged in remotely can access the whole system, sandboxed only by the system and file permissions. Chrooting users into their home directory can be a good idea. Unfortunately, it’s not as trivial as it seems, and it may require relying on tools like rssh. It can be a complicated setup, but if your threat model implies a risk of data leakage, it’s worth it.
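
If your users only need file transfers, recent OpenSSH versions offer a built-in alternative to tools like rssh: chrooting SFTP-only users with internal-sftp. A sketch (the group name "sftponly" is an assumption):

```
# /etc/ssh/sshd_config
# Chroot members of the "sftponly" group into their home directory.
# The chroot target must be owned by root and not group/world-writable.
Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```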

wonder woman shield GIF @Giphy

I hope this long blog post helped you understand a little more about what matters regarding remote-access security. As it relies on cryptographic strength, it may be hard to manage the whole security of your stack by yourself. The pieces of advice here are what they are: safety tips to increase your security. Please never underestimate how hard it is to keep a whole system safe, and never begrudge working with competent partners who are there to help you. One of our jobs at alwaysdata is to do it for you: we make sure the balance is right, giving you flexibility without compromising security.

If you want to talk a bit about security, I’ll be in Rennes on Mar. 29, 2018, for the Breizhcamp conference, talking about cryptography and development. See you there!

Notes

1. duckduckgoing/qwanting the missing resources is also a great way to get more information
2. feel free to ask questions in the comments 🙂
3. and you genuinely don’t want to run across the world to connect and repair your systems
4. or when your collaborators quit
5. public and private
6. we’ll soon provide a solution in your admin panel to fill your public key right there
7. store them securely in a password manager, of course
8. you can also take a look at ufw to write your firewalling rules
9. because we can :)
10. never, ever, rm -rf / blindly
11. it is also a good idea to disable it locally and always use sudo to execute root commands
SaaS, PaaS, IaaS: what are the differences and what does it matter? Mon, 05 Mar 2018 18:55:49 +0000 We often get legitimate questions from people who don’t know the alwaysdata solution in depth. A recurrent one concerns our pricing, often because users misunderstand our offerings. We’re aware this happens because what we do isn’t typical for a host. So1), let’s take a look at what we’re doing here.

question GIF @Giphy

What’s the Cloud (computing)?

In data centers, you will find servers2), network systems, and cables (a lot of them!) to plug everything together. They may also host storage systems like SAN devices and various other kinds of equipment. Data centers are not just identical machines ad infinitum.

When you access a remote service like a website, an API, your e-mail, or any other network-based solution, you reach one of those servers. Requests go through routers, switches, and cables3). The server handles your request, processes the data (maybe pulling it from a remote storage system or a distant database), and returns its response.

In order to host that kind of solution4), you have to push your code to one of those servers. Your app will then be available online to your users.

To run your app’s codebase, you need a complex stack of tools installed on the server, and in reality, someone has to manage those tools. That’s the point of what we do at alwaysdata: we don’t just rent you a machine, we also manage it for you.

Behind the fog

As we said, in-the-cloud servers are computers. Let’s have a look at what we need to run them:

TL;DR: This part is dense (like British fog). If you don’t want all the details, skip to the next part. In short: the Cloud is an infrastructure where data centers host servers, which embed the whole stack you need to power your own application.

A data center

Maybe it’s obvious, maybe it’s not, but we need a place to install, power-plug, and run our servers. The data center physically hosts the machines, and supplies everything they need to run (electricity, controlled temperature, secured access, etc.).


The network

We need to connect the servers to the rest of the world, so a network infrastructure is mandatory. It includes routers (to connect to network providers), switches (to distribute the network to the machines), firewalls and anti-DDoS protections, sensors, etc.


The servers

Basically, yeah, we need machines in order to rent machines. So physical servers provide CPU, memory, storage, possibly GPU computing, etc. They can be built by different manufacturers, with different technical features (CPU models or architectures, RAM size, etc.). These machines are the raw computing power of the infrastructure.

Isolation (Virtualization / Containerization5))

This is low-level software, executed in kernel space or similar, that runs virtual machines or containers6) and distributes hardware resources (memory, CPU time and power, disk space, network, etc.). It virtually isolates accounts on the server, giving users trusted access to an almost-whole system, even if it isn’t a physical one.

The Operating System (OS)

This is the core software of your machine. It can be GNU/Linux, BSD, Unix, Windows, or some other server-oriented solution. It acts as a link between the hardware (even if virtualized) and the rest of your software stack to give access to the memory, CPU, network, etc. Basically, without it, nothing runs.

The Infrastructure software

Here’s the point.

You need to run your codebase on your server, and this codebase is dependent on a lot of tools and libraries. It involves databases, servers (like an HTTP server), interpreters (like Python, PHP, Node.js, etc.), maybe brokers, caching solutions, indexers, and so on. You will also need a way to get remote access, through SSH or FTP. Maybe you’ll need a versioning system too (probably Git or Mercurial). You will definitely need an e-mail stack, not only for your own e-mail boxes, but also to allow your app and system to send messages when needed (e.g. in case of errors). And you will have to secure it all with firewalls and ban systems to prevent attacks.

It’s a huge and complex system that needs to be maintained, updated, and monitored. It often comes with a dedicated interface to allow you to manage its features and configuration options.

Your hosted application

Congrats 🎉 !

You’ve finally got a server up and running, so you can now deploy your app/website/solution in a production context, to serve it to your users.

knife cut GIF by Scooby-Doo GIF @Giphy

That’s the basics of what you can find in the Cloud. Whatever service you run online, whatever provider you choose, the stack will have to be like this. This means that for any service you want to run, you have to worry about this whole stack, or you have to find some partner to worry about a part of it for you.

Infrastructure, Platform, Software (as-a-service), what’s the difference?

As seen above, the technical stack needed to run your service is huge: hard to build from scratch, hard to maintain. So the market organized itself around the strengths required to deliver those services. We’re in the age of as-a-service solutions, with three kinds of offers targeting different customer audiences: IaaS, PaaS, and SaaS. They can be visualized like this:

IaaS vs PaaS vs SaaS schema

Offers labeled as-a-service have existed for a while. This label is only a gift box7) that packages the old well-known workers: sysadmins, network architects, security experts, DevOps, etc. All of these people work in the basement to ensure you’ve got the desired quality to power your apps.


IaaS

Infrastructure-as-a-service offers the basic physical stack. The fee you pay provides access to a machine, physical or virtualized. Your provider takes care of the data center (its own, or a subcontracted one), network access, physical servers, network systems, storage units, and possibly the virtualization layer in the case of a VPS.

  • what you have to do: You have to manage the OS (often provided in a bare version by your server renter), its security, the technical stack, libs, tools, etc. Then you will be able to deploy your app, configure it for production use, and run it.
  • what you need to keep in mind: managing a whole stack is hard. You’ll have to maintain everything on your machines by yourself. It means paying (with money or with time) for sysadmin and network tasks, security, backups, recovery in case of emergency, and migration. It’s a critical point and you are on your own.

Note: It’s possible that one IaaS provider can rely on another IaaS offer. This means one provider can rent a VPS hosted on physical servers owned by another IaaS provider. Depending on your needs and constraints, remember to check how your provider works.

gets russian dolls GIF by Cheezburger @Giphy


PaaS

Platform-as-a-service gives you the infrastructure seen above, plus maintenance of the whole system stack: OS, interpreters, libs, databases, security, etc. It often provides a way to manage your environment without a mess, such as a CLI, config files in your project, a dedicated versioned repository, or a web panel for GUI use.

  • what you have to do: You just have to deploy and configure your app to run it.
  • what you need to keep in mind: Your provider handles all the costs of sysadmin, network, security, backups, etc. Deploying your app remains your job; your provider probably won’t help you with that. Also, you’re responsible for the security of your app itself. To keep it simple: you pay for the DevOps costs.


SaaS

Software-as-a-service is a more advanced solution where you use the software out-of-the-box as a customer; you don’t need to deploy your own instance. You subscribe to the provider’s offer to get full access to the solution as a user. The provider can use its own infrastructure/platform, or rely on a subcontractor for hosting. This is the model used by many tech startups: they develop software sold in SaaS mode and rely on hosters for their infrastructure.

  • what you have to do: For you, as an end-user, it’s totally transparent.
  • what you need to keep in mind: You can’t customize the server outside of the application settings. It’s a service, remotely available, that acts like an app on your phone. You never have access to the full server. If you use many apps in SaaS mode, the costs add up (per user or per service), so using a PaaS offer to host all your apps is probably the smarter move.

So, let’s do another sketch (I like to do sketches) to summarize those concepts. In each cloud, you’ll find what roles you pay for in each offer, which also means what you can’t manage by yourself:

Domains of responsibility for XaaS offers schema

Who are you, alwaysdata?

We are a PaaS provider, and we own our physical stack. We maintain for you the entire system you need to deliver your solution to your customers. This is true for our VPS and Dedicated Server offers, but it’s also true for our Shared Hosting offer: in our model, VPS and Dedicated Server use the same platform as Shared Hosting. The difference is that with the former two, you’re completely alone on your server instance: you’re the only one consuming resources.

When we started alwaysdata, we chose not to be just an IaaS provider. Twelve years ago, we couldn’t find a solution that gave us the flexibility we needed to host apps and services in a managed environment, so we made our own and decided to release it to the rest of the world. And that’s the basic reason why we can’t be compared with plain IaaS solutions: they simply don’t offer the same level of service, nor the same level of quality.

We chose to offer a full environment. alwaysdata provides support for all interpreters available on the market, the ability to run whatever programs you want in your userspace, the ability to run services in the background, many SQL and NoSQL databases, full SSH remote access, and much more! Even in the Shared Hosting environment, our features go far beyond those of our competitors, who often only support PHP behind a single Apache instance, with a MySQL database and no SSH access.

Performance and security are also in our DNA. We never blindly rely on the virtualization layer for isolation, as is done elsewhere. Instead, we use the kernel and OS mechanisms designed to isolate accounts. This practice allows us to give you an increased level of performance with no compromise on security. Of course we use virtualization in some parts of our platform, but only where it makes sense.

i see eye roll GIF by Warner Archive @Giphy

I hope this post helps you better understand the key differences between hosting providers’ offers. Give it a try and see what a modern PaaS solution should always be like: giving you the comfiest position to run your apps has been our goal since we started!

Notes

1. hoping things will be more understandable with the next release of our website
2. which are computers generally with high computational power
3. never underestimate the cables!
4. I will talk about the server-less paradigm in a future blog post, where we will learn that a server-less solution generally implies a server anyway ¯\_(ツ)_/¯
5. I also have a blog post in my todo to talk about virtualization vs. containerization vs. isolation
6. purists will probably already put my name on their need-to-kill list when they read that, I hope you’ll forgive me for this shortcut in my explanation: I try to stay concise, and the debate on virtualization vs. containers in hosting is out of the scope of this post
7. but gift boxes are cool, they’re a promise of joy
4 years later: being independent, feedback Tue, 20 Feb 2018 09:37:03 +0000 In 2013, we decided to stop contracting out our physical infrastructure. With the benefit of hindsight, we wanted to share our experience with you.

Experience GIF @Giphy

First and foremost, high quality, all-inclusive/user-friendly

If you are an alwaysdata customer, then you are already familiar with the cockpit! If you aren’t, then take five minutes to try our free 100MB shared hosting plan and explore our services and interface.

We see our management interface as a cockpit. We designed it so you can administer everything in one place: domains, websites, databases, remote access, configurations; everything is there. You can also contact us directly from the cockpit via our ticketing system. And it has a lot more features!

alwaysdata's cockpit - status view - screenshot [en]

In 2006, no hosting offer fit our needs. This is why we built one and developed our own tools internally. Ten years later, we regret nothing. Our customers have access to the most flexible and user-friendly service on the market. These are two of our main strengths.

We host everything, but the core of our business is offering managed services. When you open an account at alwaysdata, you have access to a wide range of services: interpreters, databases, brokers, e-mails, backups, etc. And you have nothing to worry about. Everything is managed, safe, and kept up-to-date.

Nevertheless, one can only do so much. Developing the software structure and building a physical infrastructure that fit our ambitions was too great a challenge at the time. We had to choose, and we chose software, leaving physical infrastructure management to trusted third parties.

Or so we thought. For the first few years, we relied on the OVH infrastructure, with extra servers at Amazon EC2, Hetzner, and Online. This architecture allowed us to focus on developing our project: management interface, monitoring, support, alproxy, account management, and more.

Then, four years ago, we became fed up with the poor quality of our subcontractors. We had come a long way, but the subcontractors had seriously impacted our customers’ experience. That’s when we decided to become our own hardware and network operator.

sounds like a plan ok GIF @Giphy

No compromises

We stopped subcontracting because our subcontractors provided you with a substandard experience. Our physical infrastructure had to match our ambition to provide you with the best services.

The first servers

It started with a few machines. We chose, configured, and installed them ourselves. Of course, it wasn’t easy: we had to learn the hosting business in depth. But we began to see results: no global breakdown, ever1).

During this process, we’ve stood by our principles:

  • Neutrality, always. From the data centers handled by Equinix, a neutral operator, to the network operators IELO-LIAZO, Nerim, Cogent, and Interoute, we guarantee the independence of your data, traffic, and operations. We are a member of RIPE, which ensures we remain independent as an operator on our own IP ranges.
  • Quality, always. We use high-end servers equipped with datacenter-grade SSDs in production. They all have dual power supplies on separate electrical circuits, and dual network interfaces connected to switches from different brands. Our years of subcontracting taught us that the details matter2). This was the right choice: no major breakdown in all these years is a great achievement.

Redundancy: at any cost?

We are often asked about data redundancy at alwaysdata. Back in the day, we duplicated data on two distinct infrastructures. However, this method was extremely costly to maintain, and the unlikelihood of a massive outage of the primary infrastructure could not justify the associated costs. We talked about this decision[fr] back in 2012.

Our goal is not a zero Recovery Time Objective (RTO). We tried real-time redundancy, and it actually decreased our quality of service by introducing bugs and making the whole system unstable. Our availability rate is excellent, with only a negligible incidence of breakdowns. Such a complex and risky architecture is not justifiable given those results.

What matters to us is the Recovery Point Objective (RPO). We ensure that our backups are easy to recover and are not corrupted. Local data are mirrored (RAID1) on every server. In case of emergency, we can move data from an unavailable server to a spare one in the same bay, guaranteeing a functioning system in under 30 minutes. This solution gives us the flexibility to recover the original server calmly, without hurry.

This is not real-time redundancy, but you have the assurance of continuity of service3) in case of incident.


Your data continuity matters. We keep the last 30 days of backups in a second data center operated by Online, and on our own servers. Ensuring quality is our main goal. Our own physical infrastructure has resulted in fewer corrupted disks: only two during the last four years4), compared to three or four per month with our last subcontractor.

Be free, for the best

Our freedom allows us to offer more choice in server configurations. We’re no longer limited by subcontractors: we choose only hardware and suppliers we trust. We manage our own supply chain, which allows us to prevent delays. Within hours, we can deploy servers into production in standard configurations.

As an independent operator, we are freer than we were with subcontractors. With our own IP ranges, we no longer suffer from the blacklisting that affected our subcontractors. In addition, we can do advanced access filtering on our own side.

We finally have better control: deploying IPv6 wouldn’t have been possible if we didn’t have our own infrastructure. Now, we are ready to offer something unique: a hosting platform with software solutions that perfectly fit your needs.

2018, an all-inclusive offer

In 2013, alwaysdata started with only three devices. Now, we have nearly fifty times that number of servers and devices to power your apps and websites. The migration took some time, but we learned a lot about our skills and your needs. With the last servers finally running on our own side, we’re free.

We’re free to provide you with a solution you’re happy to work with. We often say that alwaysdata is conceived by devs, for devs; now more than ever, that’s our main driver. We will continue to offer you more and more features on both the hardware and software sides. This adventure wouldn’t have happened without you. Thank you for your confidence in our ability to change the hosting world.

charlie brown hug GIF @Giphy

We at alwaysdata will never run out of ideas about new features. Let’s play in the comments: which technologies — unlikely or wacky, hardware or software — can you imagine coming in the alwaysdata platform?

Notes

1. unless you count this Sunday at 1 a.m., which lasted for fewer than five minutes
2. a breakdown due to something as basic as overheating was simply not an option
3. if you need more, stay tuned! A gold offer will be introduced soon, featuring all the high availability options you could possibly need and a physical redundancy in another data center. You will have practically a 99.95% availability rate
4. and only in the last month with two new disks, probably because they came from a bad batch