A Small Rust API with Actix

May 10, 2018 · 6 min read

I needed a very small API for this website. So small, in fact, that only one endpoint was required. I've been doing a lot of development in Rust lately, so Rust naturally seemed like a good candidate for building this API. I also wanted to try out a newer Rust web framework called Actix web. It claims to be a "small, pragmatic, and extremely fast rust web framework", which sounded perfect for what I needed.

Getting started with Actix web is pretty straightforward. First, create a new Rust project:

cargo new my_api --bin

Cargo (the Rust package manager) is installed along with Rustup, the popular Rust installer. Adding the Actix web server to your project is done by first adding the dependencies to Cargo.toml:

[dependencies]
actix = "0.5"
actix-web = "0.6"

and then starting a server in main:

extern crate actix_web;
use actix_web::{server, App, HttpRequest};

fn index(req: HttpRequest) -> &'static str {
    "Hello world!"
}

fn main() {
    server::new(
        || App::new()
            .resource("/", |r| r.f(index)))
        .bind("127.0.0.1:8088").expect("Can not bind to 127.0.0.1:8088")
        .run();
}

The Actix web quickstart guide gives a pretty good overview of getting started with Actix web.
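
With the server code in place, cargo run will compile and start it, and the endpoint can be sanity-checked with curl (this assumes the Hello world handler above):

cargo run
curl http://127.0.0.1:8088/
Hello world!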

The functionality I wanted for this particular API was to return some stats about my running for the year to display on this website. To get that data, I needed to make a couple of GET requests to Running Ahead, parse the responses and return a JSON structure with the total mileage run for the year and the mileage from my 5 most recent runs:

{
  "year": "422",
  "latest": [
    "6.9",
    "7.78",
    "6.98",
    "7.71",
    "6.96"
  ]
}
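
On the Rust side, this response shape maps directly onto a small struct deriving serde's Serialize (the same struct appears in the full source at the end of this post):

#[derive(Debug, Deserialize, Serialize)]
struct MilesData {
    year: String,
    latest: Vec<String>,
}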

The first thing to do was to figure out how to make the GET requests. Actix web ships a client module with a ClientRequest struct that allows you to make standard HTTP requests. I used ClientRequest to fetch a page from Running Ahead and return a boxed Future that parses the resulting content into a Vec of Strings.

/// get mileage for 5 latest runs
fn get_latest() -> Box<Future<Item=Vec<String>, Error=Error>> {
    Box::new(
        client::ClientRequest::get("https://www.runningahead.com/scripts/<my_user_id>/latest")
            .finish().unwrap()
            .send()
            .map_err(Error::from)
            .and_then(
                |resp| resp.body()
                    .from_err()
                    .and_then(|body| {
                        // capture the mileage values (e.g. "6.9") out of the HTML
                        let re = Regex::new(r">([0-9]*?\.[0-9]*?|[0-9]*?) mi").unwrap();
                        fut_ok(re.captures_iter(str::from_utf8(&body).unwrap())
                            .map(|item| item[1].to_string())
                            .collect())
                    })
            ),
    )
}

Note that I'm using str::from_utf8 to convert the raw body bytes into a &str that the regular expression can match against.
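
Since str::from_utf8 returns a Result, that unwrap will panic if Running Ahead ever serves invalid UTF-8. A more defensive sketch of the inner closure (my variation, not what the site actually runs; error and StatusCode are already imported in the full source below) maps the failure into an actix error and returns a Result instead:

.and_then(|body| {
    // fail the request with a 500 instead of panicking on invalid UTF-8
    let text = str::from_utf8(&body)
        .map_err(|e| error::InternalError::new(e, StatusCode::INTERNAL_SERVER_ERROR))?;
    let re = Regex::new(r">([0-9]*?\.[0-9]*?|[0-9]*?) mi").unwrap();
    Ok(re.captures_iter(text)
        .map(|item| item[1].to_string())
        .collect::<Vec<String>>())
})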

The request to get the total mileage for the year is very similar.

/// Get total miles for the year
fn get_year() -> Box<Future<Item=String, Error=Error>> {
    Box::new(
        client::ClientRequest::get("https://www.runningahead.com/scripts/<my_user_id>/last")
            .finish().unwrap()
            .send()
            .map_err(Error::from)
            .and_then(
                |resp| resp.body()
                    .from_err()
                    .and_then(|body| {
                        let re = Regex::new("(?s)<th>Year:</th><td>(.*?) mi</td>").unwrap();
                        let mat = re.captures(str::from_utf8(&body).unwrap()).unwrap();
                        fut_ok(mat[1].to_string())
                    })
            ),
    )
}

Remember that both of these functions return Futures, so the two results can be combined once both requests have resolved. In the endpoint below, the calls are chained together with and_then (so the second request fires after the first completes) and the results are merged into the JSON response:

fn running(req: HttpRequest) -> Box<Future<Item=HttpResponse, Error=Error>> {
    get_year()
        .and_then(|miles_year| {
            get_latest().and_then(|miles_latest| {
                Ok(HttpResponse::Ok()
                    .content_type("application/json")
                    .body(serde_json::to_string(&MilesData {
                        year: miles_year,
                        latest: miles_latest,
                    }).unwrap()).into())
            })
        }).responder()
}
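
Since the two requests don't depend on each other, they could also run concurrently by joining the futures instead of chaining them. Here is a minimal sketch of that variation using the join combinator from futures 0.1 (assuming the same get_year and get_latest helpers; this isn't the version running on the site):

fn running(req: HttpRequest) -> Box<Future<Item=HttpResponse, Error=Error>> {
    // poll both requests at the same time; resolves to a (year, latest) tuple
    get_year()
        .join(get_latest())
        .and_then(|(miles_year, miles_latest)| {
            Ok(HttpResponse::Ok()
                .content_type("application/json")
                .body(serde_json::to_string(&MilesData {
                    year: miles_year,
                    latest: miles_latest,
                }).unwrap()).into())
        }).responder()
}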

All of this got me most of the way to where I needed to be. However, since the calls to Running Ahead are made over HTTPS, SSL support needs to be enabled for the actix-web dependency. This can be done by adding the alpn feature:

[dependencies]
actix-web = { version="0.6", features=["alpn"] }

Once I had alpn enabled, everything worked well on my local (macOS) machine. However, when I deployed to a Linux server with an nginx process providing SSL, I was met with a strange error message:

Error occured during request handling: Failed to connect to host: OpenSSL error: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1245:

Very strange. After much Googling, I found a reference that suggested trying openssl-probe. This crate searches out the locations of SSL certificates on the system, which was exactly what I needed. Using openssl-probe requires adding the dependency to Cargo.toml

[dependencies]
openssl-probe = "0.1.2"

and adding this to src/main.rs:

extern crate openssl_probe;

fn main() {
    openssl_probe::init_ssl_cert_env_vars();
    //... your code
}

Here is the final Cargo.toml:

[package]
name = "stevezeidner-api"
version = "0.1.0"
authors = ["Steve Zeidner <steve@stevezeidner.com>"]

[dependencies]
futures = "0.1"
env_logger = "0.5"
actix = "0.5"
actix-web = { version="0.5", features=["alpn"] }
openssl-probe = "0.1.2"

serde = "1.0"
serde_json = "1.0"
serde_derive = "1.0"
json = "*"
regex = "0.2"

and the full source for src/main.rs:

#![allow(unused_variables)]
#![cfg_attr(feature = "cargo-clippy", allow(needless_pass_by_value))]

extern crate actix;
extern crate actix_web;
extern crate env_logger;
extern crate futures;
extern crate json;
extern crate openssl_probe;
extern crate regex;
#[macro_use]
extern crate serde_derive;
extern crate serde_json;

use actix_web::{App, AsyncResponder, error, Error, fs,
                HttpMessage, HttpRequest, HttpResponse, pred, Result, server};
use actix_web::{client, middleware};
use actix_web::http::{Method, StatusCode};
use futures::{Future, future::ok as fut_ok};
use regex::Regex;
use std::{env, io};
use std::str;

#[derive(Debug, Deserialize, Serialize)]
struct MilesData {
    year: String,
    latest: Vec<String>,
}

/// Get total miles for the year
fn get_year() -> Box<Future<Item=String, Error=Error>> {
    Box::new(
        client::ClientRequest::get("https://www.runningahead.com/scripts/<my_user_id>/last")
            .finish().unwrap()
            .send()
            .map_err(Error::from)
            .and_then(
                |resp| resp.body()
                    .from_err()
                    .and_then(|body| {
                        let re = Regex::new("(?s)<th>Year:</th><td>(.*?) mi</td>").unwrap();
                        let mat = re.captures(str::from_utf8(&body).unwrap()).unwrap();
                        fut_ok(mat[1].to_string())
                    })
            ),
    )
}

/// get mileage for 5 latest runs
fn get_latest() -> Box<Future<Item=Vec<String>, Error=Error>> {
    Box::new(
        client::ClientRequest::get("https://www.runningahead.com/scripts/<my_user_id>/latest")
            .finish().unwrap()
            .send()
            .map_err(Error::from)
            .and_then(
                |resp| resp.body()
                    .from_err()
                    .and_then(|body| {
                        // capture the mileage values (e.g. "6.9") out of the HTML
                        let re = Regex::new(r">([0-9]*?\.[0-9]*?|[0-9]*?) mi").unwrap();
                        fut_ok(re.captures_iter(str::from_utf8(&body).unwrap())
                            .map(|item| item[1].to_string())
                            .collect())
                    })
            ),
    )
}

fn running(req: HttpRequest) -> Box<Future<Item=HttpResponse, Error=Error>> {
    get_year()
        .and_then(|miles_year| {
            get_latest().and_then(|miles_latest| {
                Ok(HttpResponse::Ok()
                    .content_type("application/json")
                    .body(serde_json::to_string(&MilesData {
                        year: miles_year,
                        latest: miles_latest,
                    }).unwrap()).into())
            })
        }).responder()
}

/// 404 handler
fn p404(req: HttpRequest) -> Result<fs::NamedFile> {
    Ok(fs::NamedFile::open("static/404.html")?
        .set_status_code(StatusCode::NOT_FOUND))
}

fn main() {
    openssl_probe::init_ssl_cert_env_vars();
    env::set_var("RUST_LOG", "actix_web=debug");
    env::set_var("RUST_BACKTRACE", "1");
    env_logger::init();
    let sys = actix::System::new("stevezeidner-api");

    let addr = server::new(
        || App::new()
            // enable logger
            .middleware(middleware::Logger::default())
            .middleware(middleware::DefaultHeaders::new()
                    .header("Access-Control-Allow-Origin", "*"))
            .resource("/running", |r| r.method(Method::GET).a(running))
            .resource("/error", |r| r.f(|req| {
                error::InternalError::new(
                    io::Error::new(io::ErrorKind::Other, "test"), StatusCode::INTERNAL_SERVER_ERROR)
            }))
            // default
            .default_resource(|r| {
                // 404 for GET request
                r.method(Method::GET).f(p404);

                // all requests that are not `GET`
                r.route().filter(pred::Not(pred::Get())).f(
                    |req| HttpResponse::MethodNotAllowed());
            }))

        .bind("127.0.0.1:8888").expect("Can not bind to 127.0.0.1:8888")
        .shutdown_timeout(0)    // <- Set shutdown timeout to 0 seconds (default 60s)
        .start();

    println!("Starting http server: 127.0.0.1:8888");
    let _ = sys.run();
}

The Well-Rounded Developer

April 7, 2013 · 5 min read

Should the role of a front-end developer be limited to only client-side technologies? I have asked myself this question a lot lately. I come from a background where, as a web developer, I typically work across the following development stack to design and build a product:

  • Design. Graphic design and page layout, down to how a page flows responsively across devices
  • Client-side code. HTML, CSS and JavaScript
  • Data. Flat data files, RESTful web services, relational DBs, NoSQL DBs, ...
  • Server-side code. PHP, .NET, Ruby, Lisp, Node.js, Python, ...

Recently, I have moved into a position where I am focused on fewer core languages and technologies. As a result, I find myself thinking about the value of becoming an expert in one area of the stack. Is there more value in being an expert than in being a well-rounded developer? The upside to becoming an expert in one subject is that there is more time to devote to exploring every nook and cranny of that subject's subculture. After all, web development is an art form. We are artists who should know our medium and our style.

However, it is this very focus that often makes us lose sight of the broader picture. Programming is not about a particular technology or where it falls in the stack. Fundamentally, it is about solving problems. An understanding of when to execute code on the server and when it's best handled in the browser allows a developer to come up with the most efficient solution to the problem.

Jiro, in Jiro Dreams of Sushi, states that "Once you decide on your occupation, you must immerse yourself in your work. You have to fall in love with your work. Never complain about your job. You must dedicate your life to mastering your skill. That’s the secret of success and is the key to being regarded honorably." Jiro's skill is sushi. He dedicated his life to developing and mastering the techniques of making the best sushi. In order to accomplish this goal, Jiro had to master the art of finding the right seafood vendors, picking the best fish (tuna, octopus, shrimp), preparing the fish, making the rice and creating an experience for his customers. If there was a problem with any part of the cycle, the sushi would be sub-par.

So it is with development. A problem in any layer of the stack, often caused by a lack of knowledge, can create fragile dependencies, inefficient results or, worse, buggy code.

I have heard it said that new tech moves too quickly for well-rounded developers to keep up. This is true in the sense that no one developer has the time to become THE expert in every language, framework and platform that exists today. For a new developer, it can be overwhelming to look at the options that exist and wonder where to begin. However, if we take a step back, we are able to see that this pace of innovation is actually the fuel that drives the well-rounded developer. I say this for two reasons:

  1. The fundamentals of programming have not changed.
  2. The new frameworks and tools allow us to stand on the shoulders of giants.

Concepts such as object-oriented programming, data models and design fundamentals stay relatively stable over time. There are many different implementations of these principles, and the principles themselves are expanded upon and refined over time, but much of the knowledge is transferable between languages and platforms. Differences are found mostly in syntax and philosophies. While syntax varies greatly among languages, the ones that tend to gain the most support are derivatives of earlier high-level languages such as Fortran and C. So a lot of syntax knowledge is transferable as well. While the creator's philosophy of a particular language or framework can vary, there are a finite number of general philosophies in existence, and code design patterns often transcend philosophies. As Solomon said: "What has been will be again, what has been done will be done again; there is nothing new under the sun."

Programmers who have been working at their craft for a while often say that development is much easier now than it ever was. They are getting at the idea that we do not have to mess around with as much low-level stuff as we used to. Thirty years ago, programmers had to write device-level drivers just to connect to a database or manage a network connection. Given the same amount of time today, we are able to create more feature-rich, complex applications because of the work done by those who have come before us. Frameworks in any context (server-side and client-side) continue to build on this infrastructure and will only speed the pace at which we can develop amazing products.

All of this is good in a general sense, but the real time-consuming part of becoming a well-rounded developer is spent honing the details of one's craft. It is difficult to decide which framework is best suited for a project, and even more tedious to learn all the exceptions and caveats that come with a particular language or framework. Because of this, community is a vital component of a well-rounded developer's workflow. Pick ecosystems that have good community support. Find the best framework for a project from the discoveries that others have made. Share what you learn when you develop for a platform. And above all, build new things.

Twitter API v1.1 Front-end Access

February 16, 2013 · 4 min read

Twitter is retiring their v1 API in March of 2013. This means all future API integrations must use the v1.1 API, which requires authentication on any request (read or write). This is pretty straightforward using a PHP OAuth library or any server-side OAuth implementation, but what if you wanted to implement something client-side? This can be accomplished by using the Yahoo! Query Language (YQL) to do the heavy lifting for us.

A Twitter app is necessary to do any OAuth authentication. Go to https://dev.twitter.com/apps and create a new Twitter application. Once your application is created, click the "Create my access token" button to link it with your Twitter account. You will then have a Consumer Key, Consumer Secret, Access Token and Access Token Secret for this application, associated with your Twitter account. Make a note of these values.

Next, create a text file that contains the keys from your application (leave the env line as it is):

env 'http://datatables.org/alltables.env';
set oauth_consumer_key = "kSCAs8K62d60v2RjT8Q" on twitter;
set oauth_consumer_secret = "oq1WlA0itYPoKqkg1VnLdPcrmq5qugXh0aYV62oIA" on twitter;
set oauth_token = "14409872-ygtLVnhRr8ABSioMu28DHD5iJ6Yj8U3CEozxlTwsD" on twitter;
set oauth_token_secret = "bqY5TXGSGwy72TmLgPgYz1jpW1riExHYNJVcqPIFCUE" on twitter;

Upload the text file to an accessible URL and go to the YQL console. Run the following query in the console, replacing NAME with whatever name you want to reference the stored data by, and URL with the URL of the text file you just uploaded:

INSERT INTO yql.storage.admin (name,url)
VALUES ("NAME","URL")

The result of this query will contain a node with the store URL (ex. store://my.host.com/name). Make note of the value of this node.

Your application's OAuth keys are now stored in a table that can be accessed from another YQL query. This is important because YQL also has community tables that allow for Twitter API requests. The following JavaScript (yes, some jQuery is used to simplify the AJAX call) requests recent tweets from the @resource Twitter account and uses the stored keys for authentication. Just change the env value in the options below to the value of the node you took note of earlier.

/* Set up the YQL query (http://developer.yahoo.com/yql/) */
var query = 'SELECT * FROM twitter.status.timeline.user '+
            'WHERE id="@resource" ';

/* Options for the YQL request
    * q = your query
    * format = json or xml
    * env = environment to pull stored data from
    */
var dataString = {
    q: query,
    diagnostics: true,
    format: 'json',
    env: 'store://my.host.com/name'
};

/* make the AJAX request and output to the screen */
$(document).ready(function() {
    $.ajax({
        url: 'https://query.yahooapis.com/v1/public/yql',
        data: dataString,
        success: function(data) {
            $('#returnData').html(JSON.stringify(data, undefined, 2));
        }
    });
});

That's pretty much all there is to making client-side Twitter API read requests, with YQL doing the heavy lifting (the example above assumes an element with the id returnData on the page for displaying the output). A couple of things to keep in mind:

  1. The security on this is not great (it's more security through obscurity). Anyone with the env link can execute read requests, but they don't have direct access to your keys. It's always better to implement this API server-side if you are able to do so.
  2. Both APIs rate limit the endpoint calls. YQL has a 2,000 calls per hour per IP limit to their public API. Here is an explanation of Twitter's rate limits. Caching should be implemented to avoid hitting these limits.

Here is a CodePen link to a working example. This concept was adapted from Derek Gathright's blog post.

SASS Ruby Extension to Check if File Exists

February 6, 2013 · 2 min read

CSS is executed client-side, so it cannot check for the existence of an image, font or other asset file being referenced. However, since Sass is written in Ruby, it allows for server-side calls by extending Sass with custom functions. Here is a custom function to check for the existence of a file:

module Sass::Script::Functions
  # Does the supplied image exist?
  def file_exists(image_file)
    path = image_file.value
    Sass::Script::Bool.new(File.exists?(path))
  end
end

If this code is placed in a file named functions.rb, the Sass watch command would be:

sass --watch style.scss:style.css --require functions.rb

So, why would you ever need to check for the existence of a file at Sass compile time? One place I found it useful (I'm sure there are others) was in eliminating duplication of internationalized CTA (call-to-action) images. Canadian (or British) English is similar to U.S. English in many ways, but some words differ between the two (favorite vs. favourite, for example). The following Sass mixin selects a CTA image from a folder based on the lang attribute set on the page. In the case of Canadian English, it first checks whether the image exists in the en-ca folder. If not, it falls back to the image in the en-us folder. This avoids duplicating the English images that are the same in both Canadian and U.S. English. The benefits of this are:

  1. Fewer total assets, so they are easier to maintain
  2. The total asset payload is smaller (especially important if used in a mobile app)

@mixin localeImage($image: null) {
    [lang="en-us"] & {
        background-image: url('assets/img/en-us/#{$image}');
    }
    [lang="en-ca"] & {
        $file: 'assets/img/en-ca/#{$image}';
        @if file_exists($file) {
            background-image: url('#{$file}');
        } @else {
            background-image: url('assets/img/en-us/#{$image}');
        }
    }
    [lang="fr-ca"] & {
        background-image: url('assets/img/fr-ca/#{$image}');
    }
}
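
Using the mixin is then a one-liner per component. A hypothetical usage example (the selector and download.png asset are made up for illustration):

.cta-download {
    @include localeImage('download.png');
}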

Front-End Web Development

January 16, 2013 · 5 min read

The state of front-end web development has changed significantly over the last couple of years. Perhaps it was the introduction of responsive design in early 2010, the release of a Retina iPad and, shortly thereafter, the Retina MacBook Pro in mid-2012, or Adobe "killing" mobile Flash in late 2011 that prompted the change.

A few years ago, a front-end developer title often defined the term "developer" rather loosely. HTML and CSS were often the only required languages for this role, and they are really content markup languages rather than true development languages (however you might choose to define that). A few years before that, we were building websites purely with HTML and images, using tables to implement our layouts. A lot has changed. If you are interested in learning more web history, check out Eric Meyer's excellent podcast, The Web Behind.

Regardless of what actually prompted this recent shift, I believe there are (at least) two major reasons that the front-end developer role will continue to be significant for quite some time:

  1. CSS and image support has gotten much more powerful and complex.
  2. HTML/CSS/JavaScript can be used to build native cross-platform mobile apps (PhoneGap, Appcelerator Titanium).

CSS3, Retina Images, and Bandwidth, Oh My!

The CSS3 spec added lots of very powerful new features. Things like rounded corners, shadows and gradients can be achieved easily in browsers that support CSS3. There are also more advanced features like CSS transforms and animations, which require a bit more knowledge about things like keyframes and perspective. Browser-specific prefixes must be applied to many of these new features to ensure compatibility as the spec continues to evolve. Of course, the downside to all of this is that CSS code has become hard to maintain. Fortunately, CSS pre-processors have been created to help make CSS code more maintainable and object-oriented (ish). Sass and LESS add features like functions, variables and mixins, as well as code libraries (Sass has Compass). Sass does seem to be gaining in popularity for a number of reasons, but the point is that this is one more tool for a front-end developer to learn.

A hot topic in the front-end developer community at the moment is how to support images across different devices, resolutions, pixel densities and varying connection speeds. The increasing consumer use of retina-density displays and better internet access from smartphones requires developers to think about how to support multiple sizes of higher-quality images at lower file sizes. There are many solutions for supporting higher-resolution images on retina displays. You could detect a retina display and serve up double-resolution (@2x) images for those displays. This requires two versions of every image to be created, which isn't so bad. The problem is that a device like an iPhone has a retina display, as does something like a MacBook Pro. The MacBook Pro could be connected via Ethernet to a fast connection, and the iPhone may be on 4G now, but in seconds it could be down to an EDGE connection as the user moves away from the nearest cell tower. This article does a good job of explaining the pitfalls of trying to measure or predict bandwidth at the CSS level in order to serve up different image sizes.

So, there is some added complexity that comes with more mobile devices and retina displays. In addition, any image, whether it's targeted at a retina screen or a smartphone, should have as small a file size as possible. Image optimization tools can do an amazing job of reducing file size while maintaining quality. Even with these tools, a great deal of manual effort is still required to make sure that images maintain quality once compressed.

Mobile Apps

Because the three main "languages" in use on the presentation layer of the web (CSS, HTML, JavaScript) are so widely known, developers have come up with solutions for building mobile apps on all major platforms using these languages of the web. A mobile app can be built completely in JavaScript (plus HTML and CSS) and compiled with either the open-source PhoneGap framework or Appcelerator Titanium for iOS, Android and a number of other mobile OSes. This gives any front-end web developer the tools to build mobile apps without learning a new language. It sounds like a dream come true, but in reality, most apps are complex in nature and require a framework (like Backbone.js) to organize the JavaScript code. With the maturity of these mobile app frameworks that use commonly known web languages, the front-end developer role has expanded to be much larger than it once was.

Conclusion

These are very exciting times indeed for front-end devs. So much new technology is being released that the door is wide open for innovation. I'm just excited that we are no longer using table layouts.