Saturday, April 28, 2018

Going serverless with Internet-of-Things and Azure Functions


The internet-of-things concept prevails in nearly every article on future IT prospects. And such a fascinating topic it is, too; the scenarios are limitless and the use-cases palpable - it feels as if we should have been able to fulfill them ages ago. But let's put the philosophical notions aside; I'd like to demonstrate my choice of technology for an IoT solution based on the Azure serverless offerings. So, without further ado, let's get to it.

The internet-of-things - IoT from here on - may be realized in a myriad of fashions; today I'll cover the use of Microsoft's Azure cloud functionality, specifically Azure Function Apps, Azure Service Bus and Azure Storage - with a very little bit of Azure Logic Apps thrown in for good measure. So it's a whole lot of Azure, and the title speaks of 'serverless', too. In case you're unfamiliar with the term, I'll offer my own definition: 'serverless' refers to resources you do not need to carefully provision on your own. Of course, no request for a specific resource is ever truly 'serverless'; somewhere, however deep down in the many-layered stack, some physical hardware element is destined to provide the response to the request - there's no getting around that yet. So the term should not be taken literally; rather, it's a moniker for getting resources up and running, and responding to requests, without having to care much about physical hardware constraints. You do not need to, in honor of the phrasing of the term, set up an actual server to respond to requests; you simply deploy the response-mechanism itself to the Azure cloud, and instead of tending to hardware-specific metrics - how many CPUs, whether to deploy load balancing, such considerations - you can focus on how you would like the response-mechanism to behave, for example how to scale.

But I digress; you're likely fully aware of the term and its implications. For the sake of this article, however, serverless also implies a consideration about the setup: my IoT devices will send data to, and receive data from, the aforementioned cloud services directly, as opposed to letting requests be handled by a central broker such as an MQTT implementation. This is a wholly deliberate choice; this particular IoT architecture is suited for smaller projects, where the requirements can be fixed more readily, and we can thus make assumptions that we would otherwise have to abstract away. A further limitation of this particular implementation, to be amended later, is the lack of focus on device security and provisioning - complicated topics in their own right. Make no mistake, this IoT implementation will work fine and have extremely limited financial impact. But if you're dealing with tens of thousands of distinct sensors and devices, across a multitude of locations, you will likely be better served by Microsoft's dedicated IoT portfolio, and I recommend you investigate that route further.

Tedious limitations aside, let’s dive into the fun stuff! Please keep the below architecture-diagram in mind as we go through a simple use-case, along the way deliberating on the technological choices.




The example will be of a temperature-sensor that sends a temperature-reading to our cloud-based back-end, optionally in turn retrieving a device-specific command to execute.

The device delivers its reading to an Azure Function (1) by way of an HTTPS REST-call. The function is responsible for storing the raw data into an Azure Storage queue (2). Azure Functions are tremendously cheap to execute - you can have millions of executions for very low cost - thus ideal for a network of sensors that frequently send data. A similar financial argument applies to the Azure Storage account that we will use to hold our sensor data. The queue storage is ideal for our purpose; it is designed to hold work-items to be processed at some later stage, at which point the data is simply removed from the queue - and we can also specify a dedicated 'time-to-live' for the data, if needed. But, most importantly, we can make use of a dedicated Azure Function App trigger (3) that activates on new items in the queue. In this specific case it retrieves the data from the queue and, from the raw sensor data, creates a more specific, enriched data model. We could do this in the first Azure Function, certainly, but the abstraction point is important in that it enables us to inject business logic here, if that is later needed. At present the only logic is in retrieving sensor-device information from an Azure storage table (4), and adding a bit of this information to the device-message that then goes into an Azure storage table that holds sensor data (5) - but later on we might add sensor-authentication in there, and at least now we have an abstraction point in which to implement this further down the line. The Azure Functions scale well; if your queue becomes crowded and you're running the functions on a so-called consumption plan, for example, the platform will simply spin up more instances to handle the load. That really speaks to the core of the serverless term.

So, the second Azure Function creates a more meaningful piece of data (6) - I add information about the specific type of sensor, for example - and sends this to an Azure Service Bus topic (7). The Azure Service Bus is a data ingestion and distribution mechanism, quite capable of receiving and handling millions of messages within an ambitious time-frame - just what we might need for a sensor-rich IoT solution. It is not the only tool-choice in regards to mass-message ingestion; Microsoft offers the dedicated Azure IoT Hub and Event Hubs, for example, and other vendors have their own offerings. The reasons I chose it are as follows: it's cheap, fast, simple, and it plays extremely well with Azure Functions, as we'll get around to shortly. The Azure Service Bus receives messages in two ways: directly into a queue, not unlike the storage queue albeit with significantly enhanced features - or, as we'll do here, via the Service Bus topic feature, where we send messages into a so-called 'topic', which we may then subscribe to. This is the general publish-subscribe mechanism most often associated with various service bus implementations, and it works well with an IoT scenario such as this. In my specific implementation, I create a generic 'message received'-topic, and into this I then send every piece of sensor-data that is received and enriched. This enrichment of the data is mainly what facilitates a meaningful filtering of the messages into dedicated subscriptions (8). A simple example: a temperature sensor sends a reading to the receiving Azure Function. The raw data is enriched with device-type information, so that we can infer the sensor-type - a temperature sensor - from the device-id. This enriched message is then sent to the service bus, where a subscription to the topic will pick up, for example, any messages from temperature sensors with a temperature exceeding a given threshold (a sketch of such a filter-rule follows below).
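To make the filtering concrete, here's a sketch of how such a subscription filter-rule might be created in code with the Microsoft.Azure.ServiceBus client library. Consider it an illustration rather than my verbatim setup: the subscription itself is assumed to already exist (created in the portal, for example), and the rule-name and the 30-degree threshold are stand-in example values. The filter operates on the user-properties that are stamped onto each enriched message later in this article.

using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public static class SubscriptionSetup
{
    // Replaces the subscription's default catch-all rule with a filter that only lets
    // high readings from temperature-devices through to 'temperatureHighSubscription'.
    public static async Task CreateTemperatureHighRuleAsync(string serviceBusConnectionString)
    {
        var subscriptionClient = new SubscriptionClient(
            serviceBusConnectionString, "devicemessagetopic", "temperatureHighSubscription");

        // Out with the default rule, which would otherwise pass every message through...
        await subscriptionClient.RemoveRuleAsync(RuleDescription.DefaultRuleName);

        // ...and in with a sql-like filter on the message user-properties.
        await subscriptionClient.AddRuleAsync(new RuleDescription(
            "TemperatureHighRule",
            new SqlFilter("DeviceType = 'TemperatureMeasurementDevice' AND NumericalValue > 30")));

        await subscriptionClient.CloseAsync();
    }
}

You could equally well set up the rule in the Azure portal; doing it in code just makes the filtering logic explicit and repeatable.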

The advantage of this, in conjunction with the use of the Azure Function App, becomes quite clear as we react to messages being picked up by our various topic-subscriptions, such as the 'TemperatureHigh'-subscription. The subscriptions act as nothing more than message-filters and -routers. In order to consume the messages we have, as is almost always the case with the Azure platform, multiple ways of going about it. For our implementation we'll implement another Azure Function, specific to messages being sent to the 'TemperatureHigh'-subscription. It's that simple - we specify a subscription-name as we create the function, then deploy it, and the Azure infrastructure sees to it that the function is triggered appropriately (9). We do not have to poll continuously; we're always hooked up, so to speak. This is a major advantage of integrating these two technologies: the possibility of quickly building an infrastructure that's equally capable and reliable. The downside remains, of course, that it's a very efficient yet hard-coupled architecture - there's no replacing the service bus component with another cloud provider's similar product. There's always that trade-off we need to make; for my purposes, this coupling works extremely well: as messages arrive at their dedicated subscription, an equally dedicated Azure Function is triggered, and the message is thus consumed and acted upon. The act, in this scenario, is in issuing an appropriate command (10) for the device itself, or possibly another device. For example, given a higher than usual temperature, we might issue a command to the device itself to sound an alert-buzzer. The command goes into an Azure Storage table (11), where we keep the history of issued commands, for audit trail and visualization purposes. It's from this table the final Azure Function retrieves the command, upon a periodic request (12) from the sensor device, which then executes it.

 So that’s an example of an IoT architecture based on Azure cloud technologies, without a central broker. Once again, it’s not to be considered best practice for all scenarios, please don’t implement it without regard for the circumstances pertaining to your own demands.

Now, for some technical aspects, I’d like to present some bits of the code of the Azure Functions, and thus go further into the details behind the implementation.

The Azure Function that receives the raw data from the sensor is an HTTP-triggered one. Here's the code, annotated further below:



[FunctionName("QueueRawValue")]
public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "QueueRawValue")]HttpRequestMessage req, TraceWriter log)
{
    try
    {
        log.Info("C# QueueRawValue http trigger function processed a request.");

        string deviceId = req.GetQueryNameValuePairs().FirstOrDefault(q => string.Compare(q.Key, "deviceId", true) == 0).Value; 
        string value = req.GetQueryNameValuePairs().FirstOrDefault(q => string.Compare(q.Key, "value", true) == 0).Value;
        DateTime timestamp = DateTime.Now;

        CloudStorageAccount cloudStorageAccount = CloudConfigurationFactory.GetCloudStorageAccount();
        var queueClient = cloudStorageAccount.CreateCloudQueueClient();
        var queueReference = queueClient.GetQueueReference("iotl15queue");
        // Create the queue if it doesn't already exist
        await queueReference.CreateIfNotExistsAsync();

        RawIngestDataModel data = new RawIngestDataModel
        {
            DeviceId = deviceId,
            RawValue = value
        };
        string queueMessage = JsonConvert.SerializeObject(data, Formatting.None);
        var message = new CloudQueueMessage(queueMessage);
        await queueReference.AddMessageAsync(message);

        return deviceId == null
            ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a deviceId on the query string or in the request body")
            : req.CreateResponse(HttpStatusCode.OK);
    }
    catch (Exception e)
    {
        // todo: do some logging
        throw e;
    }
}



Please disregard the deliberately blatant lack of security considerations and other such concerns, and focus on the basic functionality. The raw json-data from the sensor is put onto a storage queue for later processing. That's all the function does: grabbing raw input data from the request parameters and storing it in a 'RawIngestDataModel' object, representing just that - raw input data in any shape or form. So we have a very basic way of capturing information and storing it in a queue, for eventual later - hopefully swift and efficient - processing. We could process the raw data at this stage, but this design provides us with an extension point we might put to good use later on: if the number of requests were to suddenly sky-rocket, the queue would easily scale to fit, by virtue of its built-in cloud capabilities.
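For reference, here's what a call to that function might look like from a device, or from a quick test-client - a sketch, where the 'myfunctionapp' host-name is a placeholder for your own function app:

using System.Net.Http;
using System.Threading.Tasks;

public static class SensorClientExample
{
    private static readonly HttpClient httpClient = new HttpClient();

    // Delivers a raw reading to the QueueRawValue function.
    public static async Task SendReadingAsync(string deviceId, string value)
    {
        string url = $"https://myfunctionapp.azurewebsites.net/api/QueueRawValue?deviceId={deviceId}&value={value}";
        HttpResponseMessage response = await httpClient.GetAsync(url);
        response.EnsureSuccessStatusCode(); // BadRequest if no deviceId was supplied
    }
}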

The next function, in turn, is triggered by new items arriving on that queue:

[FunctionName("ProcessRawQueueMessage")]
public static void Run([QueueTrigger("iotl15queue", Connection = "AzureStorageConnectionString")]string myQueueItem, TraceWriter log)
{
    try
    {
        RawIngestDataModel rawIngestData = JsonConvert.DeserializeObject<RawIngestDataModel>(myQueueItem);
        
        CloudStorageAccount cloudStorageAccount = CloudConfigurationFactory.GetCloudStorageAccount();
        var cloudService = new AzureTableStorageService(cloudStorageAccount);
        RegisteredValueModel registeredValueModel = CreateRegisteredDatamodelFromRawInput(rawIngestData);
        cloudService.SendRegisteredDataToTableStorage(registeredValueModel);
  
        // send to servicesbus
        string ServiceBusConnectionString = @"Endpoint=sb://myservicesbus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=ssharedkeyD/2zayhog=";
        string TopicName = @"devicemessagetopic";
        ITopicClient topicClient = new TopicClient(ServiceBusConnectionString, TopicName);

        // Create a new message to send to the topic.
        string messageBody = JsonConvert.SerializeObject(registeredValueModel);
        var message = new Message(Encoding.UTF8.GetBytes(messageBody));

        message.UserProperties.Add("DeviceId", registeredValueModel.DeviceId);
        message.UserProperties.Add("TextValue", registeredValueModel.TextValue);
        message.UserProperties.Add("NumericalValue", registeredValueModel.NumericalValue);
        DeviceModel deviceInformation = GetDataAboutDevice(registeredValueModel.DeviceId);
        message.UserProperties.Add("DeviceType", deviceInformation.DeviceType);

        // TODO: enrich with device-type, etc.

        // Send the message to the topic.
        topicClient.SendAsync(message);

        log.Info($"C# Queue trigger function processed: {myQueueItem}");
    }
    catch (Exception e)
    {
        // todo: do some logging
        throw e;
    }
}

private static RegisteredValueModel CreateRegisteredDatamodelFromRawInput(RawIngestDataModel rawIngestData)
{
    RegisteredValueModel registeredValueModel = new RegisteredValueModel()
    {
        DeviceId = rawIngestData.DeviceId,
        TextValue = rawIngestData.RawValue,
    };

    float attemptToParseValueAsNumerical;
    if (float.TryParse(rawIngestData.RawValue, out attemptToParseValueAsNumerical))
        registeredValueModel.NumericalValue = attemptToParseValueAsNumerical;

    return registeredValueModel;
}

/// <summary>
/// Get device-data from table storage
/// </summary>
/// <remarks>
/// Return dummy data for now.
/// </remarks>
private static DeviceModel GetDataAboutDevice(string deviceId)
{
    // TODO: implement this. Consider memory caching.

    DeviceModel temporaryDeviceModel = new DeviceModel()
    {
        DeviceId = deviceId,
        DeviceType = "TemperatureMeasurementDevice"
    };
    return temporaryDeviceModel;
}
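As an aside - the RegisteredValueModel itself isn't part of the listings in this article. A rough sketch of what it might look like, an assumption on my part rather than the verbatim implementation, would be:

using System;
using Microsoft.WindowsAzure.Storage.Table;

// Inheriting from TableEntity lets the model go straight into an Azure Storage table.
public class RegisteredValueModel : TableEntity
{
    // Parameterless constructor, used by the table storage sdk and when the keys
    // are assigned later (in AzureTableStorageService, in my case).
    public RegisteredValueModel()
    {
    }

    public RegisteredValueModel(string deviceId)
    {
        PartitionKey = deviceId;            // the device-id as partition-key - a natural fit
        RowKey = Guid.NewGuid().ToString(); // any unique value will do; a reversed timestamp is another option
        DeviceId = deviceId;
    }

    public string DeviceId { get; set; }
    public string TextValue { get; set; }
    public double NumericalValue { get; set; } // double rather than float - table storage supports double
}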


The ProcessRawQueueMessage function above dequeues data from the queue, for further processing. The bulk of the work is already done for us: connecting to the queue and reacting to new entries in it - all this rather crucial functionality comes already wired up and ready to be made good use of. Almost seems too good to be true, does it not? It's well worth remembering the old adage, 'if it seems too good to be true...': we do get a tremendous amount of proven functionality 'for free', so to speak, but of course we also give up having a say in how all this is achieved; we're tied into the Azure platform. This is an acceptable choice for my particular IoT implementation, but may not be for yours - it's pros and cons, and something you should take into serious consideration, as per your particular scenario.

The code retrieves the first available raw data from the queue and transforms it into a RegisteredValueModel object. Note how this inherits from TableEntity, so we can store it within an Azure Table Storage table; for my purposes I'm using the device-id as partition-key on the table, as this seems a natural fit (a rough sketch of the model is shown above). From this table we'll later be able to do visualizations and historic compilations of the device data, though that's for a later blog-entry. The most important bit, for now, is in noting how the registered device data is sent to the Azure Service Bus topic, whose name - 'devicemessagetopic' - indicates how this topic indeed receives all messages from all devices. Here, then, the responsibility of the Azure Function stops. Now we can go and create subscriptions to this topic, as pertains to our specific use-cases - for example the aforementioned subscription to dangerously high temperatures from my temperature-sensors. "temperatureHighSubscription" is my name for it, and given this name and a valid connection into the service bus, we can easily create an Azure Function that triggers when the Azure Service Bus filters messages into this subscription:


[FunctionName("GeneralHighTempTriggerFunction")]
public static async Task Run([ServiceBusTrigger("devicemessagetopic", "temperatureHighSubscription", Connection = "AzureServiceBusConnectionString")]string mySbMsg, TraceWriter log)
{
    log.Info($"C# ServiceBus topic trigger function processed message: {mySbMsg}");

    RegisteredValueModel dataFromServiceBusSubscription = JsonConvert.DeserializeObject<RegisteredValueModel>(mySbMsg);

    // Add to commandModel history data table
    DeviceCommandModel deviceCommand = new DeviceCommandModel()
    {
        DeviceId = dataFromServiceBusSubscription.DeviceId,
        CommandText = "SoundAlarm",
        SentToDevice = false
    };

    CloudStorageAccount cloudStorageAccount = CloudConfigurationFactory.GetCloudStorageAccount();
    var cloudService = new AzureTableStorageService(cloudStorageAccount);
    cloudService.SendDeviceCommandToTableStorage(deviceCommand);

    // Send notification of high temperature to azure logic app:
    INotificationService azureLogicAppNotificationService = new AzureLogicAppSendPushOverNotificationService();
    NotificationModel notification = new NotificationModel()
    {
        Title = "Temperature-alarm",
        Message = $"Temperature-device {dataFromServiceBusSubscription.DeviceId} at {dataFromServiceBusSubscription.NumericalValue:F} degrees",
        From = "IOT",
        To = "pushOverUserId" 
    };
    await azureLogicAppNotificationService.SendNotification(notification);
}

Couldn't be much easier - the trigger is already wired up by design, and the functionality to act on it is all that remains for us to implement. In my case, the subscription is a call to action, namely triggering a device-specific command to act on the high temperature: 'SoundAlarm'. All commands are stored in an Azure storage table, which serves as both audit trail and command repository: all devices may, if so configured, continuously poll this table for any command that needs to be executed by them - identified by their device-id. A quick Azure Function, http-triggered, delivers the goods:


/// <summary>
/// Retrieves the latest non-yet-retrieved command for a device, if any such command exists.
/// </summary>
[FunctionName("GetCommandFromServicesBus")]
public static HttpResponseMessage Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "GetCommandFromServicesBus")]HttpRequestMessage req, TraceWriter log)
{
    string deviceId = req.GetQueryNameValuePairs().FirstOrDefault(q => string.Compare(q.Key, "deviceId", true) == 0).Value; //req.Content["deviceId"];

    CloudStorageAccount cloudStorageAccount = CloudConfigurationFactory.GetCloudStorageAccount();
    var cloudService = new AzureTableStorageService(cloudStorageAccount);
    DeviceCommandModel commandForDevice = cloudService.GetCommandFromTableStorage(deviceId);

    return commandForDevice == null ?
        req.CreateResponse(HttpStatusCode.OK) // no commands found, just OK status
        :
        req.CreateResponse(HttpStatusCode.OK, commandForDevice); // command found; the formatter serializes it to json - serializing manually here would json-encode the string a second time
}
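On the device side - or from a test-harness - the periodic polling for commands might then look along these lines; again a sketch, with 'myfunctionapp' as a placeholder host-name:

using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public static class DevicePollingExample
{
    private static readonly HttpClient httpClient = new HttpClient();

    // Asks the GetCommandFromServicesBus function for the latest outstanding command, if any.
    public static async Task<DeviceCommandModel> PollForCommandAsync(string deviceId)
    {
        string url = $"https://myfunctionapp.azurewebsites.net/api/GetCommandFromServicesBus?deviceId={deviceId}";
        string responseBody = await httpClient.GetStringAsync(url);

        // An empty body means there's no outstanding command for this device.
        return string.IsNullOrEmpty(responseBody)
            ? null
            : JsonConvert.DeserializeObject<DeviceCommandModel>(responseBody);
    }
}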

And so round and round it goes: devices ship data, the data is enriched and sent to a service bus, where it may or may not be picked up by a subscription, which in turn triggers a command, and so on and so forth.

I haven't touched upon the use of Azure Logic Apps, and I shan't go into them save to note that I do implement a couple, for notification purposes - for example in the above 'GeneralHighTempTriggerFunction' code. Azure Logic Apps give us the ability to glue many Azure offerings together, but that's not my use-case as yet. You can have an Azure Logic App listen for subscription-hits on your service bus, for example, and compile multiple messages into a single command to a device, or vice versa. The graphical interface with which you create the Logic Apps is intuitive, yet allows for great complexity in the execution. You could also make use of it as an elaborate extension point, and outsource business logic to others while you take care of the data yourself, for example.
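For the curious: a notification service like the AzureLogicAppSendPushOverNotificationService used in 'GeneralHighTempTriggerFunction' can be as simple as an HTTP POST against a Logic App's 'When a HTTP request is received'-trigger. A sketch of what such an implementation might look like - the endpoint-url is a placeholder; Azure generates the real one, signature and all, when you save the Logic App:

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

// A sketch, not the verbatim implementation: posts the notification-model as json to
// the Logic App's http-trigger; the Logic App then forwards it to the Pushover service.
public class AzureLogicAppSendPushOverNotificationService : INotificationService
{
    private static readonly HttpClient httpClient = new HttpClient();

    // Placeholder url - copy the real one from the Logic App designer.
    private const string LogicAppUrl = "https://prod-00.westeurope.logic.azure.com/workflows/...";

    public async Task SendNotification(NotificationModel notification)
    {
        string json = JsonConvert.SerializeObject(notification);
        var content = new StringContent(json, Encoding.UTF8, "application/json");
        HttpResponseMessage response = await httpClient.PostAsync(LogicAppUrl, content);
        response.EnsureSuccessStatusCode();
    }
}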

So that's a bit of inspiration, I hope, on going serverless with Azure and getting those IoT-messages flowing. I won't lie, getting just a dozen devices up and running, sending data and commands back and forth, is fun to watch - and those Azure offerings make it simple and mostly intuitive to get started. Of course there's tons of stuff I haven't covered in detail in the above, and I'll leave you to second-guess the missing functions that'll enable the code to compile. It's meant as an appetizer, and I look forward to learning about your particular 'main course', so please, by all means, drop me a note about what you're doing with Azure and IoT.

IoT projects are fun to be part of; I wish I could do more of it but, to my chagrin, my career-path never led me down that road besides trying it out for fun at home. I hope the above will inspire you in your endeavors. Thanks for checking it out, and if there's anything I can do to help out, get in touch with me and I'll try and do that.


Thursday, December 8, 2016

Using C# .NET for auto-responding to SurveyMonkey surveys


If you need to auto-respond to SurveyMonkey surveys, you can perchance use the below C# code as a source of inspiration.

I used it for testing a survey that we had running internally. Please bear in mind that your company's SurveyMonkey-subscription might put a cap on the number of survey-responses.

Basically it works by utilizing the Selenium library (download it care of NuGet) for web-page testing. I used the Chrome web-driver because it didn't store cookies or history for the browser session. I also included a unique temporary value when accessing the survey, or I would get the "you've already responded"-message.

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Interactions;

    class Program
    {
        static void Main(string[] args)
        {
            // Respond to the survey a fixed number of times.
            for (int i = 0; i < 58; i++)
            {
                RespondToSurvey();
                Console.WriteLine(i);
            }
        }

        private static void RespondToSurvey()
        {
            OpenQA.Selenium.Chrome.ChromeDriver driver = new OpenQA.Selenium.Chrome.ChromeDriver();

            // Unique query-string value per visit, to avoid the "you've already responded"-message.
            string baseUrl = @"https://da.surveymonkey.com/r/?tempValue=" + DateTime.Now.Ticks;

            driver.Navigate().GoToUrl(baseUrl);

            // Pick the desired answer; the element-id is specific to the survey in question.
            Actions actions = new Actions(driver);
            IWebElement radioBtn = driver.FindElementById("72374030_573987619");
            actions.MoveToElement(radioBtn).Click().Perform();

            var element = driver.FindElement(By.Name("surveyForm"));
            element.Submit();

            driver.Quit(); // closes the browser and ends the chromedriver process
        }
    }

Sunday, December 4, 2016

Connecting an ESP8266-12 to a DS1820 thermometer and doing an HTTP POST of the data to the internet.

Here's how to connect an ESP8266-12 to a DS1820 thermometer and perform an HTTP POST of the temperature readings.

Pre-requisites! I'll be using these components:

@ A standard ESP8266-12 on a breakout-board such as the one shown.
@ Three 4.7k resistors.
@ A 3v battery (I use a CR123a) for powering the ESP8266-12.
@ A DS1820 (I use the TO-92 packaged version below)

The following requirements should be met:

@ All connections, including those on the ESP8266-12 breakout-board, are solid and conduct power as they should.
@ The battery holds at least 2.8v.
@ The ESP8266-12 has been programmed with the following sketch:



#include <OneWire.h>
#include <DallasTemperature.h>
#include <ESP8266WiFi.h>
#include <ESP8266WiFiMulti.h>
#include <ESP8266HTTPClient.h>

#define ONEWIRE_PIN 13

OneWire oneWire(ONEWIRE_PIN);
DallasTemperature sensors(&oneWire);

boolean TempSensorAvailable = false;
DeviceAddress TempSensor;
float tempCtry1;
float tempCtry2;

// Referenced in the post-message below, so declared here. Populate vdd with a
// real battery-voltage reading (e.g. via ESP.getVcc()) as appropriate.
int retries = 0;
float vdd = 0;

//AP definitions
#define AP_SSID "your wifi-network name here"
#define AP_PASSWORD "wifi password here"

void setup() {
  //enable this to test from the arduino serial monitor
  Serial.begin(74880);

  sensors.begin();

  Serial.print("Found ");
  Serial.print(sensors.getDeviceCount(), DEC);
  Serial.println(" OneWire device(s).");

  // report parasite power requirements
  Serial.print("Parasite power: ");
  if (sensors.isParasitePowerMode()) Serial.println("ON");
  else Serial.println("OFF");

  if (!sensors.getAddress(TempSensor, 0)) {
    Serial.println("No OneWire Device Found");
  } else {
    TempSensorAvailable = true;
    Serial.println("OneWire Device Found");
    sensors.setResolution(TempSensor, 12);
  }
}

void loop() {
  wifiConnect();
  postTemperature();

  delay(60 * 1000);
}

void postTemperature()
{
  sensors.requestTemperatures(); // Get temperature
  tempCtry1 = sensors.getTempC(TempSensor); // save temperature
  sensors.requestTemperatures(); // Get temperature again; the average of the two readings is posted
  tempCtry2 = sensors.getTempC(TempSensor);

  HTTPClient http;
  http.begin("http://"); // your endpoint-url here
  http.addHeader("Content-Type", "application/json");
  String postMessagePart1 = String("{ 'sensorId' : 'L15-Out1', 'temperature' : '");
  String postMessagePart2 = String("', 'postAttempts' : '");
  String postMessagePart3 = String("', 'batteryVoltage' : '");
  String postMessagePart4 = String("' }");
  String postMessage = postMessagePart1 + ((tempCtry1+tempCtry2)/2) + postMessagePart2 + retries + postMessagePart3 + vdd + postMessagePart4;
  int httpCode = http.POST(postMessage);
  Serial.print("http result:");
  Serial.println(httpCode);

  http.writeToStream(&Serial);
  http.end();

  // Retry until the post succeeds.
  if (httpCode != 200 && httpCode != 201)
  {
    retries++;
    delay(1000);
    postTemperature();
  }
}

void wifiConnect()
{
  Serial.print("Connecting to AP");
  WiFi.begin(AP_SSID, AP_PASSWORD);
  while (WiFi.status() != WL_CONNECTED) {
    delay(1000);
    Serial.print(".");
  }

  Serial.println("");
  Serial.println("WiFi connected");
}

void wifiConnect()
{
Serial.print("Connecting to AP");
WiFi.begin(AP_SSID, AP_PASSWORD);
while (WiFi.status() != WL_CONNECTED) {
delay(1000);
Serial.print(".");
}

Serial.println("");
Serial.println("WiFi connected");
}



Don't forget to import the libraries into the Arduino IDE environment.
Given the above is in order, go ahead and connect the components as per the following pictures:



- The RESET and GPIO16 pins should be connected - this enables the ESP8266-12 to wake up from deep sleep mode.
- The GPIO0 and GPIO2 should be connected to VCC with a 4.7k resistor in the middle. This is to prevent a so-called 'zombie-mode', in which the ESP8266-12 has trouble waking up from deep sleep.
- The data-line and the VCC line of the DS1820 should be joined by a 4.7k resistor, or the temperature will not be read.
- The data-line of the DS1820 should be connected to the GPIO13-pin of the stand-alone ESP8266-12, as this corresponds with the "#define ONEWIRE_PIN 13" statement of the code.


With the ESP connected like so, my ESP8266-12 happily does an HTTP POST to my web-service every 60 seconds, before repeating the cycle.

You should put the ESP8266-12 into deep sleep mode if you power your thermometer via battery.


Friday, November 25, 2016

Parallelizing data-processing with the TPL DataFlow library

I highly recommend the TPL - Task Parallel Library - 'DataFlow' library. It's a very good abstraction on top of the TPL itself, and easy to use. I was in a situation where I had to parallelize the execution of a file-converter, which in a single-instance run used only 15% CPU. By parallelizing it I was able to utilize 100% CPU and finish the conversion-job much, much quicker.

It works with .NET 4.5 and onwards, and I believe I saw a .NET Core version, too. But here's the .NET 4.5 version: https://www.nuget.org/packages/Microsoft.Tpl.Dataflow

Install with NuGet and look to the web for examples of use. Note that many of the examples deal with async-awaitable methods, but the library works quite well with synchronous tasks too. I had no need for async use, so my inspiration-example below uses synchronous tasks only:

public void ConvertFilesInFolder(string sourceFilesFolderPath)
{
    string[] filePathsAndNames = getFilePathsAndNames(sourceFilesFolderPath);

    // Define a new 'ActionBlock' that you can push work-items to.
    var block = new ActionBlock<string>(filePathAndName =>
    {
        ConvertAndMoveTheFile(filePathAndName);
    }, new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = 6 // 6 simultaneous conversions (limit of my 3rd-party conversion library-licence)
    });

    // Go ahead and add conversion-jobs to the action-block:
    foreach (string filePathAndName in filePathsAndNames)
    {
        block.Post(filePathAndName);
    }

    block.Complete(); // that's enough jobs...
    block.Completion.Wait(); // ... now wait for them all to finish.

    /* Note that as I set the max-degree-of-parallelism to 6, we're limited to that number of
       tasks executing at the same - parallel - time. As soon as one task completes, another
       is retrieved from the action-block 'queue'. */
}

public void ConvertAndMoveTheFile(string filePathAndName)
{
    try
    {
        ConvertFile(filePathAndName);
        moveOriginalFileToArchive(filePathAndName);
    }
    catch (Exception ex)
    {
        // log, but otherwise suppress, and move on to the next file.
    }
}


I found this blog-post very helpful in getting introduced and started with the library.

Tuesday, November 15, 2016

Programming an Arduino Pro Mini using Arduino IDE and a FT232RL FTDI USB to TTL Serial Adapter


I've just taken delivery of a couple of Arduino Pro Minis and want to share how to program them, using the Arduino IDE and an FT232RL FTDI USB-to-TTL serial adapter.

The Arduino Pro Mini has the same feature-set as the Arduino Uno, for example, but is much, much smaller and doesn't consume the power the other Arduinos do, so it may be used in battery-powered projects or where space is limited.

I'm told there's a 5v and a 3.3v version - I got the 5v ones.

Please refer to the below pictures for how to wire the units up. It's really quite easy, in as much as the pins on the FT232 correspond exactly to the pins of the Arduino Pro Mini. So I'm simply using 6 female-to-female jumper cables, not even separated:



Remember to set the FT232 to 5v if you're programming the 5v version of the Pro Mini, and 3.3v if you have the 3.3v Pro Mini. That's really the only thing it's possible to mess up! The 5v/3.3v setting is made by moving this jumper to the correct position (which should be marked on the board):



Note that the Arduino Pro Mini shouldn't be connected to its own power source - the FT232 will deliver the power for now. As you connect the units together via the jumper cables, you may be surprised to note how the Arduino starts blinking red, and furthermore that a strong red LED is turned on. Don't be alarmed; at least with my version of the Pro Mini, the steady red LED simply signifies that the unit is powered on, and the blinking stems from the fact that the unit comes pre-programmed with the standard 'blink' sketch - in which the on-board LED blinks once every second.

Fire up the Arduino IDE, and in the 'tools->boards' menu select the 'Arduino Pro or Pro Mini' variant. Select the right COM-port and hit 'upload' - that really should be it.



Saturday, November 12, 2016

SOLID principles, in layman's terms: Single Responsibility

Raison d'être: I set out to write about the SOLID software development principles. Specifically, my aim was - and is - to make these things more understandable to other developers who, much like yours truly, found it troublesome to have to decipher lengthy, complex articles and books on the matter. These principles are for everyone to learn and use, but I found them hard to fully grasp; quite possibly because I come from a non-English speaking background. So with this series of articles I'm setting out to try and de-mystify the principles, with only the best intentions in mind. The principles apply to many layers of software development; in my articles, I specifically aim to describe them as they relate to programming. I hope it'll be of use to you. Thank you for stopping by.

This will be a 5-article series about SOLID. SOLID is all the rage, at least as far as the job ads I'm reading are concerned: "you are expected to honor the SOLID principles", etc. So what exactly is SOLID about? Plain and simple, it's a set of guidelines for developing object-oriented systems. They are a group of concepts that have proven themselves valuable for a great many people coding a great many pieces of software. A tale told by your elders in software engineering, if you will, that you will want to pay heed to - and you can then boast on your CV that you're into the SOLID principles. You'll be a better developer for knowing them, I promise you that. Heck, you're a better developer for simply _wanting_ to know them!

[S]OLID - The Single Responsibility principle

The Single Responsibility is the first in the set of principles that make out the SOLID acronym. It states that a class should have only one single reason to change. And one reason only.

And why is this a good thing? Because the clutter from having a class do too much stuff is long-term damaging to whatever you're trying to design.

That distinction - "a single reason to change" - is important. The principle is not "a single thing to do"; a class may certainly perform one or more related tasks within the scope of the class' responsibility. For example, a class which performs logging to both a text-file and a database would not be considered in breach of the single responsibility principle. If you were to add functionality to log to, say, a central systems management tool, that's not breaking the principle either: the class still has only one reason to change, and you have merely exercised that reason. But if you were to introduce functionality to send a notification on, say, the log-disk being close to full - that's breaking the principle: new stuff to do, unrelated to the primary responsibility. To put that distinction into code, see the sketch below.
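A hypothetical logger, merely mirroring the example just given - multiple destinations, still a single responsibility:

// A logger that writes to several destinations still has a single responsibility: logging.
// Adding another destination merely exercises the class' one reason to change.
public class MultiDestinationLogger
{
    public void WriteLog(string message)
    {
        WriteLogToFile(message);
        WriteLogToDatabase(message);
        // a new destination - say, a central systems management tool - would slot in here
    }

    private void WriteLogToFile(string message)
    {
        // do something to write the message to the text-file log
    }

    private void WriteLogToDatabase(string message)
    {
        // do something to write the message to the database log
    }
}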

Below are listed the primary risks involved in not following the principle, as well as the benefits of, well, following it. But first, let's look at how to fulfill the principle. It's very simple, really: go through your classes and look for those that perform two or more different actions. For example: do you have a class that renders an object AND saves it to disk? Not good - the class clearly has more than 'one reason to change'; it might be a change in the rendering, or a change in the persistence-to-disk functionality. Take the below example:

public class Logger
{
    public Logger()
    {
    }

    public void WriteLog(string message)
    {
        // do something to write the message to disk log
    }

    public void GenerateLoggingStatistics()
    {
            // do something to retrieve some stats
    }
}

... which finds us with a Logging-class that writes to a log-file and is capable of generating some logging statistics as well. I'm not familiar with any form of tooling which may aid in this kind of 'single-context search', unfortunately. All I can offer is this: you'll likely want to keep your classes short. Big classes - my personal, general rule of thumb is 200 lines - are a danger-sign that the class has too much on its plate. Further to that point, it's not necessarily a danger-sign that you have a multitude of small classes as opposed to a few heavy ones - likely quite the opposite. Keep 'em short and lean. In regards to how to facilitate future changes without breaking the principle, look to hide implementation behind interfaces, and there's also the magnificent strategy-pattern to be utilized.
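To make that concrete, here's one way the Logger from above could be split - each class ends up with a single reason to change, and interfaces hide the implementations, as suggested:

public interface ILogWriter
{
    void WriteLog(string message);
}

public interface ILoggingStatisticsGenerator
{
    void GenerateLoggingStatistics();
}

// Writing log-entries is now this class' single responsibility...
public class LogWriter : ILogWriter
{
    public void WriteLog(string message)
    {
        // do something to write the message to disk log
    }
}

// ...and compiling statistics is this class' single responsibility.
public class LoggingStatisticsGenerator : ILoggingStatisticsGenerator
{
    public void GenerateLoggingStatistics()
    {
        // do something to retrieve some stats
    }
}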

That's it - that's the Single Responsibility principle, the first of the five. Not so terribly difficult, but then again that's the beauty of the principles: they're easy to learn but difficult to master. The best description I've seen of violating the principle is the 'swiss army-knife class'; if you keep that image in mind as you traverse your own classes, you're bound to avoid ending up with exactly those kinds of 'do-all' classes. And as an extra positive, classes that are lean and mean are per definition easier to test, easier to read, easier to maintain. And that's what the Single Responsibility principle is all about.