Steve Zeidner
Software Developer

Cloud Backups for a Synology

January 5, 2019 · 4 min read

I have a Synology DS415+ that performs wonderfully as a NAS. It's used mostly as a place for shared content, Time Machine backups and redundancy of video data. It's great to have 16TB of available network storage, but there are limited options for a cost-effective cloud backup of multiple terabytes of data.

A couple of years ago, there was one decent option for backing up Synology data to the cloud. CrashPlan had an unlimited personal plan for less than $5/month, and its client could run directly on the Synology. The downside to CrashPlan was that every time the Synology OS was updated, there was a risk of the client breaking and needing to be re-installed and/or re-configured. For the price and the self-contained solution, this was an acceptable tradeoff, at least for me. In 2017, CrashPlan did away with their personal plan, so this route is no longer an option.

Over the last year, I've tried some other options for large, inexpensive storage, like hubiC, which offers 10TB for €50/year. I didn't find hubiC's control panel or service to be consistently reliable, so I've been on the search for other solutions and have found two that work decently well. They aren't without tradeoffs, but if you are looking for inexpensive, unlimited cloud backup, these are two decent options.

Backblaze over iSCSI

Backblaze offers an unlimited personal backup plan for $5/month that will back up any local drives on a computer. Their client only runs on a Mac or PC, but it is written natively for each platform (unlike CrashPlan, which had a memory-intensive Java client). So, this option requires a dedicated computer to run the Backblaze client, and the Synology has to be mounted as a local drive because Backblaze does not support network shares. Remember, I said there were tradeoffs.

A Mac Mini is perfect for this task. It's small and quiet and can easily be run headless. To set up an iSCSI target on the Synology, open the iSCSI Manager (from the main menu), click on Target, and create a new target and LUN; the wizard walks you through creating both, with the LUN sized to your choosing. Note that this will appear as a new drive on your Mac or PC that will be formatted by the operating system, so any data you wish to have backed up will need to be moved or copied to this drive. Again, tradeoffs.

macOS does not come with an iSCSI initiator, which is required to mount the iSCSI target as a local drive, and the options out there are pretty bleak. Most people recommend globalSAN ($89) or Xtend SAN ($249), both of which are pricey (remember, we are trying to keep this solution cost-effective). A third option I came across was KernSafe, which has a free pricing tier for personal use. That seemed promising from a cost perspective, but I could not get it to work (at least on Mojave).

After trying and failing with KernSafe, I decided to give the open source iSCSIInitiator a try. Before installing, you will need to disable protection against unsigned kernel extensions by running csrutil disable (from macOS Recovery). I also had a lot of trouble with the downloadable binary on Mojave: the tool would segfault on most runs (only about 1 in 10 tries gave me a successful run), so you do need to build and install from source. That is not a difficult task, as the included shell scripts work very well (just make sure you have Xcode and its command line tools installed). This is a command-line-only tool, so once it's installed, here is a quick guide to getting it up and running:

> iscsictl add target <target_IQN>,<synology_IP>:3260
> sudo iscsictl modify target-config <target_IQN> -auto-login enable
> iscsictl login <target_IQN>

After running the login command, you should see a prompt that an external drive has been connected. Open Disk Utility and format the drive to the file system of your choosing. Add the data you wish to back up to this drive, install Backblaze, and start backing up to the cloud.
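
If the drive doesn't appear, or you just want to confirm the LUN attached before formatting, macOS's built-in diskutil will show it in the list of disks:

> diskutil list

The Synology LUN should show up as an external physical disk.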

Google Drive Sync

A second option is to sync all your data to Google Drive using Synology's Cloud Sync app. You can get unlimited data on Google Drive for $10/month if you sign up for a G Suite Business plan; the admin user on this account gets unlimited Google Drive storage, which is perfect for syncing a large set of data from a Synology. This has none of the tradeoffs of Backblaze over iSCSI: the dedicated computer, the complicated setup, and the separate drive that can't be easily shared on the network like other NAS shares. However, the big tradeoff of Cloud Sync is that it only syncs your data (you can choose bi-directional or one-way sync); it does not create backup sets the way Backblaze does. Cloud Sync will also hog all of the upload bandwidth, whereas Backblaze throttles its uploads. To alleviate this, I set Cloud Sync's schedule to only run at night. This method also costs $5 more per month, but it does not require the upfront cost of a dedicated computer or the cost of running and maintaining one.

Because each of these options has its tradeoffs, for now I run both of them to get the best of both worlds. I was already paying for a G Suite Business plan and already had a Mac Mini, so there was little additional cost. Running both also means multiple backups are being created, which is far better than just one.

A Small Rust API with Actix

May 10, 2018 · 5 min read

I had the need for a very small API for this website. So small, in fact, that only one endpoint was required. I've been doing a lot of development in Rust lately, so naturally Rust seemed like a good candidate to build this API in. I also wanted to try out a newer Rust web framework called Actix web. It claims to be a "small, pragmatic, and extremely fast rust web framework", which sounded perfect for what I needed.

Getting started with Actix web is pretty straightforward. First, create a new Rust project:

cargo new my_api --bin

Cargo (the Rust package manager) is installed along with rustup, the popular Rust toolchain installer. Adding the Actix web server to your project can be done by first adding the dependencies to Cargo.toml:

[dependencies]
actix = "0.5"
actix-web = "0.6"

and then starting a server in main:

extern crate actix_web;
use actix_web::{server, App, HttpRequest};

fn index(req: HttpRequest) -> &'static str {
    "Hello world!"
}

fn main() {
    server::new(
        || App::new()
            .resource("/", |r| r.f(index)))
        .bind("127.0.0.1:8088").expect("Can not bind to 127.0.0.1:8088")
        .run();
}
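
At this point the server can already be taken for a spin; run it and hit the root route from a second terminal (since cargo run blocks):

cargo run
curl http://127.0.0.1:8088/

The curl call should print "Hello world!".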

The Actix web quickstart guide gives a pretty good overview of getting started with Actix web.

The functionality I wanted for this particular API was to return some stats about my running for the year to display on this website. In order to get that data, I needed to make a couple of GET requests to Running Ahead, parse that data and return a JSON structure showing the total mileage run for the year and mileage from my 5 most recent runs.

{
  "year": "422",
  "latest": [
    "6.9",
    "7.78",
    "6.98",
    "7.71",
    "6.96"
  ]
}

The first thing to do was to figure out how to make the GET requests. Actix (the underlying actor framework for Actix web) has a ClientRequest struct that allows you to make standard HTTP requests. I used ClientRequest to fetch a page from Running Ahead and return a boxed Future which parses the resulting content into a Vec of Strings.

/// get mileage for 5 latest runs
fn get_latest() -> Box<Future<Item=Vec<String>, Error=Error>> {
    Box::new(
        client::ClientRequest::get("https://www.runningahead.com/scripts/<my_user_id>/latest")
            .finish().unwrap()
            .send()
            .map_err(Error::from)
            .and_then(
                |resp| resp.body()
                    .from_err()
                    .and_then(|body| {
                        // escape the dot so the decimal point matches literally
                        let re = Regex::new(">([0-9]*?\\.[0-9]*?|[0-9]*?) mi").unwrap();
                        fut_ok(re.captures_iter(str::from_utf8(&body).unwrap())
                            .into_iter()
                            .map(|item| {
                                item[1].to_string()
                            })
                            .collect())
                    })
            ),
    )
}

Note that I'm using str::from_utf8 to convert the body bytes into a &str that the regular expression can match against.
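
The unwrap there assumes the body is valid UTF-8, which held for Running Ahead's pages. As a standalone sketch (the byte string is made up for illustration), the same conversion with the failure case handled explicitly looks like this:

use std::str;

fn main() {
    // An HTTP body arrives as raw bytes; from_utf8 validates it as UTF-8
    let body: &[u8] = b"<td>6.9 mi</td>";
    match str::from_utf8(body) {
        Ok(text) => println!("parsed body: {}", text),
        Err(e) => eprintln!("body was not valid UTF-8: {}", e),
    }
}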

The request to get the total mileage for the year is very similar.

/// Get total miles for the year
fn get_year() -> Box<Future<Item=String, Error=Error>> {
    Box::new(
        client::ClientRequest::get("https://www.runningahead.com/scripts/<my_user_id>/last")
            .finish().unwrap()
            .send()
            .map_err(Error::from)
            .and_then(
                |resp| resp.body()
                    .from_err()
                    .and_then(|body| {
                        let re = Regex::new("(?s)<th>Year:</th><td>(.*?) mi</td>").unwrap();
                        let mat = re.captures(str::from_utf8(&body).unwrap()).unwrap();
                        fut_ok(mat[1].to_string())
                    })
            ),
    )
}

Remember that these functions both return Futures, so the handler never blocks while the requests are in flight. The calls can be chained together and combined in an endpoint like so (note that chaining with and_then runs the two requests one after the other; a concurrent variant is sketched after the endpoint):

fn running(req: HttpRequest) -> Box<Future<Item=HttpResponse, Error=Error>> {
    get_year()
        .and_then(|miles_year| {
            get_latest().and_then(|miles_latest| {
                Ok(HttpResponse::Ok()
                    .content_type("application/json")
                    .body(serde_json::to_string(&MilesData {
                        year: miles_year,
                        latest: miles_latest,
                    }).unwrap()).into())
            })
        }).responder()
}
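
As written, get_latest doesn't fire until get_year has resolved. If the two requests should actually run concurrently, futures 0.1 provides Future::join; here is a sketch of that variant (same names and types as above, but this is not how the deployed endpoint is written):

fn running_concurrent(req: HttpRequest) -> Box<Future<Item=HttpResponse, Error=Error>> {
    // join polls both futures at once and resolves to a tuple of their results
    get_year()
        .join(get_latest())
        .and_then(|(miles_year, miles_latest)| {
            Ok(HttpResponse::Ok()
                .content_type("application/json")
                .body(serde_json::to_string(&MilesData {
                    year: miles_year,
                    latest: miles_latest,
                }).unwrap()).into())
        })
        .responder()
}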

All of this got me most of the way to where I needed to be. However, since the calls to Running Ahead are made over HTTPS, SSL support needs to be enabled for the actix-web dependency. This can be done by adding the alpn feature:

[dependencies]
actix-web = { version="0.6", features=["alpn"] }

Once I had alpn enabled, all worked well on my local (macOS) machine. However, when I went to deploy to a Linux server with an nginx process to provide SSL, I was met with a strange error message:

Error occured during request handling: Failed to connect to host: OpenSSL error: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1245:

Very strange. After much Googling, I found a reference that suggested trying openssl-probe. This crate searches out the locations of SSL certificates on the system and was exactly what I needed. Using openssl-probe requires adding the dependency to Cargo.toml:

[dependencies]
openssl-probe = "0.1.2"

and adding this to src/main.rs:

extern crate openssl_probe;

fn main() {
    openssl_probe::init_ssl_cert_env_vars();
    //... your code
}

Here is the final Cargo.toml:

[package]
name = "stevezeidner-api"
version = "0.1.0"
authors = ["Steve Zeidner <steve@stevezeidner.com>"]

[dependencies]
futures = "0.1"
env_logger = "0.5"
actix = "0.5"
actix-web = { version="0.6", features=["alpn"] }
openssl-probe = "0.1.2"

serde = "1.0"
serde_json = "1.0"
serde_derive = "1.0"
json = "*"
regex = "0.2"

and the source for src/main.rs:

#![allow(unused_variables)]
#![cfg_attr(feature = "cargo-clippy", allow(needless_pass_by_value))]

extern crate actix;
extern crate actix_web;
extern crate env_logger;
extern crate futures;
extern crate json;
extern crate openssl_probe;
extern crate regex;
#[macro_use]
extern crate serde_derive;
extern crate serde_json;

use actix_web::{App, AsyncResponder, error, Error, fs,
                HttpMessage, HttpRequest, HttpResponse, pred, Result, server};
use actix_web::{client, middleware};
use actix_web::http::{Method, StatusCode};
use futures::{Future, future::ok as fut_ok};
use regex::Regex;
use std::{env, io};
use std::str;

#[derive(Debug, Deserialize, Serialize)]
struct MilesData {
    year: String,
    latest: Vec<String>,
}

/// Get total miles for the year
fn get_year() -> Box<Future<Item=String, Error=Error>> {
    Box::new(
        client::ClientRequest::get("https://www.runningahead.com/scripts/<my_user_id>/last")
            .finish().unwrap()
            .send()
            .map_err(Error::from)
            .and_then(
                |resp| resp.body()
                    .from_err()
                    .and_then(|body| {
                        let re = Regex::new("(?s)<th>Year:</th><td>(.*?) mi</td>").unwrap();
                        let mat = re.captures(str::from_utf8(&body).unwrap()).unwrap();
                        fut_ok(mat[1].to_string())
                    })
            ),
    )
}

/// get mileage for 5 latest runs
fn get_latest() -> Box<Future<Item=Vec<String>, Error=Error>> {
    Box::new(
        client::ClientRequest::get("https://www.runningahead.com/scripts/<my_user_id>/latest")
            .finish().unwrap()
            .send()
            .map_err(Error::from)
            .and_then(
                |resp| resp.body()
                    .from_err()
                    .and_then(|body| {
                        // escape the dot so the decimal point matches literally
                        let re = Regex::new(">([0-9]*?\\.[0-9]*?|[0-9]*?) mi").unwrap();
                        fut_ok(re.captures_iter(str::from_utf8(&body).unwrap())
                            .into_iter()
                            .map(|item| {
                                item[1].to_string()
                            })
                            .collect())
                    })
            ),
    )
}

fn running(req: HttpRequest) -> Box<Future<Item=HttpResponse, Error=Error>> {
    get_year()
        .and_then(|miles_year| {
            get_latest().and_then(|miles_latest| {
                Ok(HttpResponse::Ok()
                    .content_type("application/json")
                    .body(serde_json::to_string(&MilesData {
                        year: miles_year,
                        latest: miles_latest,
                    }).unwrap()).into())
            })
        }).responder()
}

/// 404 handler
fn p404(req: HttpRequest) -> Result<fs::NamedFile> {
    Ok(fs::NamedFile::open("static/404.html")?
        .set_status_code(StatusCode::NOT_FOUND))
}

fn main() {
    openssl_probe::init_ssl_cert_env_vars();
    env::set_var("RUST_LOG", "actix_web=debug");
    env::set_var("RUST_BACKTRACE", "1");
    env_logger::init();
    let sys = actix::System::new("stevezeidner-api");

    let addr = server::new(
        || App::new()
            // enable logger
            .middleware(middleware::Logger::default())
            .middleware(middleware::DefaultHeaders::new()
                    .header("Access-Control-Allow-Origin", "*"))
            .resource("/running", |r| r.method(Method::GET).a(running))
            .resource("/error", |r| r.f(|req| {
                error::InternalError::new(
                    io::Error::new(io::ErrorKind::Other, "test"), StatusCode::INTERNAL_SERVER_ERROR)
            }))
            // default
            .default_resource(|r| {
                // 404 for GET request
                r.method(Method::GET).f(p404);

                // all requests that are not `GET`
                r.route().filter(pred::Not(pred::Get())).f(
                    |req| HttpResponse::MethodNotAllowed());
            }))

        .bind("127.0.0.1:8888").expect("Can not bind to 127.0.0.1:8888")
        .shutdown_timeout(0)    // <- Set shutdown timeout to 0 seconds (default 60s)
        .start();

    println!("Starting http server: 127.0.0.1:8888");
    let _ = sys.run();
}

The Well-Rounded Developer

April 7, 2013 · 4 min read

Should the role of a front-end developer be limited to only client-side technologies? I have asked myself this question a lot lately. I come from a background where, as a web developer, I typically work across the following development stack to design and build a product:

  • Design. Everything from graphic design and page layout to how a page flows responsively across devices
  • Client-side code. HTML, CSS and JavaScript
  • Data. Flat data files, RESTful web services, relational DBs, NoSQL DBs, ...
  • Server-side code. PHP, .NET, Ruby, Lisp, Node.js, Python, ...

Recently, I have moved into a position where I am focused on fewer core languages and technologies. As a result, I find myself thinking about the value of becoming an expert in one area of the stack. Is there more value in being an expert than in being a well-rounded developer?

The upside to becoming an expert in one subject is that there is more time to devote to exploring every nook and cranny of that subject's subculture. After all, web development is an art form. We are artists who should know our medium and our style. However, it is this very focus that often makes us lose sight of the broader picture. Programming is not about a particular technology or where it falls in the stack. Fundamentally, it is about solving problems. An understanding of when to execute code on the server and when it's best handled in the browser allows a developer to come up with the most efficient solution to the problem.

Jiro, in Jiro Dreams of Sushi, states that "Once you decide on your occupation, you must immerse yourself in your work. You have to fall in love with your work. Never complain about your job. You must dedicate your life to mastering your skill. That’s the secret of success and is the key to being regarded honorably." Jiro's skill is sushi. He dedicated his life to developing and mastering the techniques of making the best sushi. In order to accomplish this goal, Jiro had to master the art of finding the right seafood vendors, picking the best fish (tuna, octopus, shrimp), preparing the fish, making the rice, and creating an experience for his customers. If there was a problem with any part of the cycle, the sushi would be sub-par.

So it is with development. A problem in any layer of the stack, often caused by lack of knowledge, can create fragile dependencies, inefficient results or, worse, buggy code. I have heard it said that new tech moves too quickly for well-rounded developers to keep up. This is true in the sense that no one developer has the time to become THE expert in every language, framework and platform that exists today. For a new developer, it can be overwhelming to look at the options that exist and wonder where to begin. However, if we take a step back, we are able to see that this pace of innovation is actually the fuel that drives the well-rounded developer. I say this for two reasons:

  1. The fundamentals of programming have not changed.
  2. The new frameworks and tools allow us to stand on the shoulders of giants.

Concepts such as object-oriented programming, data models and design fundamentals stay relatively stable over time. There are many different implementations of these principles, and the principles themselves are expanded upon and refined over time, but much of the knowledge is transferable between languages and platforms. Differences are found mostly in syntax and philosophies. While syntax varies greatly among languages, the ones that tend to gain the most support are derivatives of earlier high-level languages such as Fortran and C, so a lot of syntax knowledge is transferable as well. While the creator's philosophy of a particular language or framework can vary, there are a finite number of general philosophies in existence, and code design patterns often transcend philosophies. As Solomon said: "What has been will be again, what has been done will be done again; there is nothing new under the sun."

Programmers who have been working at their craft for a while often say that development is much easier now than it ever was. They are getting at the idea that we do not have to mess around with as much low-level stuff as we used to. Thirty years ago, programmers had to write device-level drivers just to connect to a database or manage a network connection. Given the same amount of time today, we are able to create more feature-rich, complex applications because of the work that has been done by those who have come before us. Frameworks in any context (server-side and client-side) continue to build on this infrastructure and will only speed the pace at which we can develop amazing products.

All of this is good in a general sense, but the real time-consuming part of becoming a well-rounded developer is spent honing the details of one's craft. It is difficult to decide which framework is best suited for a project, and even more tedious to learn all the exceptions and caveats that come with a particular language or framework. Because of this, community is a vital component of a well-rounded developer's workflow. Pick ecosystems that have good community support. Find the best framework to use for a project from the discoveries that others have made. Share what you learn when you develop for a platform. And above all, build new things.

Twitter API v1.1 Front-end Access

February 16, 2013 · 2 min read

Twitter is retiring their v1 API in March of 2013. This means all future API integrations must use the v1.1 API, which requires authentication on every request (read or write). This is pretty straightforward using a PHP OAuth library or any server-side OAuth implementation, but what if you wanted to implement something client-side? This can be accomplished by using the Yahoo! Query Language (YQL) to do the heavy lifting for us.

A Twitter app is necessary to do any OAuth authentication. Go to https://dev.twitter.com/apps and create a new Twitter application. Once your application is created, click the "Create my access token" button to link it with your Twitter account. You will then have a Consumer Key, Consumer Secret, Access Token and Access Token Secret for this application, all associated with your Twitter account. Make a note of these values.

Next, create a text file that contains the keys from your application (leave the env line as it is):

env 'http://datatables.org/alltables.env';
set oauth_consumer_key = "kSCAs8K62d60v2RjT8Q" on twitter;
set oauth_consumer_secret = "oq1WlA0itYPoKqkg1VnLdPcrmq5qugXh0aYV62oIA" on twitter;
set oauth_token = "14409872-ygtLVnhRr8ABSioMu28DHD5iJ6Yj8U3CEozxlTwsD" on twitter;
set oauth_token_secret = "bqY5TXGSGwy72TmLgPgYz1jpW1riExHYNJVcqPIFCUE" on twitter;

Upload the text file to an accessible URL and go to the YQL console. Run the following query in the console, replacing NAME with whatever name you want to reference the stored data by and URL with the URL of the text file you just uploaded.

INSERT INTO yql.storage.admin (name,url)
VALUES ("NAME","URL")

The result of this query will contain an execute node (ex. store://my.host.com/name). Make note of the value of this node. Your application's OAuth keys are now stored in a database table that can be accessed from another YQL query. This is important because YQL also has community tables that allow for Twitter API requests. The following JavaScript (yes, some jQuery is used to simplify the AJAX call) requests recent tweets from the @resource Twitter account and uses the stored keys for authentication. Just change the env value in dataString to the value of the node that you took note of earlier.

/* Set up the YQL query (http://developer.yahoo.com/yql/) */
var query = 'SELECT * FROM twitter.status.timeline.user '+
            'WHERE id="@resource" ';

/* Options for the YQL request
    * q = your query
    * format = json or xml
    * env = environment to pull stored data from
    */
var dataString = {
    q: query,
    diagnostics: true,
    format: 'json',
    env: 'store://my.host.com/name'
};

/* make the AJAX request and output to the screen */
$(document).ready(function() {
    $.ajax({
        url: 'https://query.yahooapis.com/v1/public/yql',
        data: dataString,
        success: function(data) {
            $('#returnData').html(JSON.stringify(data, undefined, 2));
        }
    });
});

That's pretty much all there is to making client-side Twitter API read requests using YQL to do the rest of the heavy lifting. A couple of things to keep in mind:

  1. The security on this is not great (it's more security through obscurity). Anyone can use the env link to execute read requests, but they don't directly have access to your keys. It's always better to implement this API server-side if you have access to do so.
  2. Both APIs rate limit the endpoint calls. YQL has a limit of 2,000 calls per hour per IP to their public API. Here is an explanation of Twitter's rate limits. Caching should be implemented to avoid hitting these limits (a sketch follows below).
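
One way to handle that caching client-side is to keep the last YQL response in localStorage and only hit the API when it has gone stale. A minimal sketch, reusing the dataString object from the snippet above (the key name and 15-minute lifetime are arbitrary choices, not from the original setup):

/* Cache the YQL response in localStorage so repeat page loads
   don't count against the rate limits */
var CACHE_KEY = 'yqlTweets';
var CACHE_TTL = 15 * 60 * 1000;

function fetchTweets(render) {
    var cached = JSON.parse(localStorage.getItem(CACHE_KEY) || 'null');
    if (cached && (Date.now() - cached.time) < CACHE_TTL) {
        render(cached.data);
        return;
    }
    $.ajax({
        url: 'https://query.yahooapis.com/v1/public/yql',
        data: dataString,
        success: function(data) {
            localStorage.setItem(CACHE_KEY,
                JSON.stringify({ time: Date.now(), data: data }));
            render(data);
        }
    });
}

Here, render would be whatever function writes the tweets to the page, like the success handler in the earlier snippet.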

Here is the CodePen link for a working example. This concept was adapted from Derek Gathright's blog post.

SASS Ruby Extension to Check if File Exists

February 6, 2013 · 1 min read

CSS is executed client-side, so it cannot check for the existence of an image, font or other asset file being referenced. However, since Sass is written in Ruby, it can make those checks server-side at compile time by extending Sass with custom functions. Here is a custom function to check for the existence of a file:

module Sass::Script::Functions 
     # Does the supplied image exist? 
     def file_exists(image_file) 
          path = image_file.value 
          Sass::Script::Bool.new(File.exists?(path)) 
     end 
end

If this code is placed in a file named functions.rb, the Sass watch command would be:

sass --watch style.scss:style.css --require functions.rb

So, why would you ever need to check for the existence of a file at Sass compile time? One place I found it useful (I'm sure there are others) was in eliminating duplication of internationalized CTA (call-to-action) images. Canadian (or British) English is similar to U.S. English in many ways, but some words are spelled differently between the two (favorite vs. favourite, for example). The following Sass mixin selects a CTA image from a folder based on the lang attribute set on the page. In the case of Canadian English, it will first check to see if the image exists in the en-ca folder; if not, it will fall back to using the image from the en-us folder. This avoids duplicating the English images that are the same in both Canadian and U.S. English. The benefits of this are:

  1. Fewer total assets, so they are easier to maintain
  2. The total asset payload is smaller (especially important if used in a mobile app)

@mixin localeImage($image: null) { 
     [lang="en-us"] & { 
          background-image: url('assets/img/en-us/#{$image}'); 
     } 
     [lang="en-ca"] & { 
          $file: 'assets/img/en-ca/#{$image}'; 
          @if file_exists($file) { 
               background-image: url('#{$file}'); 
          } @else { 
               background-image: url('assets/img/en-us/#{$image}'); 
          } 
     } 
     [lang="fr-ca"] & { 
          background-image: url('assets/img/fr-ca/#{$image}'); 
     } 
}
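
Using the mixin is then a one-liner inside any selector (the selector and image name here are hypothetical):

.cta-favorite { 
     @include localeImage('favorite.png'); 
}

The compiled CSS contains one background-image declaration per supported lang attribute, with the en-ca rule falling back to the en-us asset when no Canadian-specific image exists.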