Processor R Code
The R code processor allows you to transform data using R scripts. R is a free software environment for statistical computing and graphics, supported by the R Foundation for Statistical Computing. The R language is widely used among statisticians and data miners for developing statistical software and performing data analysis, and its popularity has grown steadily in recent years.
To work with the R code processor, you need programming experience and enough comfort with R syntax to write simple scripts.
Topics you need to be familiar with include (but are not limited to):
- Data structures
- Control flow tools
- Working with files (opening, reading, writing, unpacking, etc.)
- Modules and packages
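As a rough gauge of the fluency assumed, you should be comfortable reading and writing a short script like the following sketch. The file name and values are made up for illustration, and the path is a throwaway temporary file:

```r
# Build a small data frame, write it to a CSV file, and read it back.
df <- data.frame(id = 1:3, value = c(10, 20, 30))

path <- file.path(tempdir(), "example.csv")  # throwaway temporary file
write.csv(df, file = path, row.names = FALSE)

df_back <- read.csv(path)
print(sum(df_back$value))  # 60
```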
If you have programming experience in other languages but are not yet familiar with R syntax, we recommend starting with the official R website and the R manuals. If you are a complete novice in programming, we suggest checking tutorials for beginners.
- OS Debian Jessie
- R Version - 3.5
- Libraries available
- car, caret, caTools, ChannelAttribution, Cubist,
- data.table, data.tree, digest, doParallel, dplyr,
- earth, ellipse, e1071,
- forecast, foreach,
- gam, gbm, gdata, ggplot2, gsl,
- ipred, ISOweek,
- kernlab, klaR,
- lattice, lubridate,
- MASS, mda, mgcv, mlbench,
- nlme, nnet,
- party, pamr, pls, plyr, pROC, proxy, purrr,
- randomForest, RANN, reshape2, R6, RcppArmadillo, rgdal,
- spls, sqldf, stringi, stringr, subselect, superpc,
- testthat, tidyverse, timeDate, tree
Data In/Data Out
Files for processing and transformation are located in the in/files and in/tables folders. Output files should be written to the out/files and out/tables folders.
Learn more about the folder structure in the configuration documentation.
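To make the folder layout concrete, here is a minimal sketch of a script that lists whatever arrived in the input buckets and writes a small table to the output bucket. It assumes the working directory is /data; the dir.create() calls are only there so the sketch also runs locally, and the result table is made up for illustration:

```r
# Create the bucket folders if absent (only needed for local runs; in
# Meiro Integrations they already exist under /data).
dir.create("in/tables", recursive = TRUE, showWarnings = FALSE)
dir.create("out/tables", recursive = TRUE, showWarnings = FALSE)

# List everything the connector placed in the input buckets.
print(list.files("in", recursive = TRUE))

# Write a small table to the output bucket.
result <- data.frame(status = "ok")
write.csv(result, file = "out/tables/result.csv", row.names = FALSE)
```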
Code Editor, Script
This field is where you write R script to process the data.
|Script location and paths||The script file is located in the /data folder. You can use an absolute or relative path to access the data files. The analog of a console log in Meiro Integrations is the activity log: output printed by the script appears there.|
|Indentation||The Script field supports R indentation and indents automatically when you start a new line where needed. This helps keep the code readable. However, if you write the script in your local IDE and copy-paste it, make sure you do not mix tabs and spaces.|
|Script requirements||While working with a data flow, you will work with files and tables a lot. Generally, you will need to open the input file, transform the data, and write it to the output file. There are no special requirements for the script beyond valid R syntax. You can find examples of scripts solving common tasks in the sections below.|
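The open–transform–write pattern described above can be sketched as follows. The input file name input.csv and the squaring step are placeholders for illustration; the first three lines only fabricate a stand-in input table so the sketch also runs outside the platform, where a connector would normally provide it:

```r
# Fabricate a stand-in input table (in the platform, a connector does this).
dir.create("in/tables", recursive = TRUE, showWarnings = FALSE)
dir.create("out/tables", recursive = TRUE, showWarnings = FALSE)
write.csv(data.frame(x = 1:5), "in/tables/input.csv", row.names = FALSE)

# 1. open the input file
input <- read.csv("in/tables/input.csv", stringsAsFactors = FALSE)

# 2. transform the data (any transformation goes here)
output <- transform(input, x_squared = x^2)

# 3. write the result to the output bucket
write.csv(output, file = "out/tables/output.csv", row.names = FALSE)
```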
Example 1
This example illustrates a simple script that imports libraries, requests an open dataset from an external source (URL), saves the response, writes the output to a table, and finally prints part of the output to the activity log. Usually you will open a file from the input bucket that was downloaded by a connector, but in some cases requesting data from external resources is necessary.
```r
# import necessary libraries
library(data.table)

# request URL and save response to variable titanic
titanic <- fread("http://web.stanford.edu/class/archive/cs/cs109/cs109.1166/stuff/titanic.csv")

# write output to table and print the first 10 rows of the Survived column
write.csv(titanic, file = "out/tables/titanic.csv", row.names = FALSE)
print(titanic[1:10, "Survived"])
```
Example 2
This example illustrates opening, filtering, and writing a CSV file. The script uses the Titanic dataset, which contains data about 887 real Titanic passengers. This dataset is open and very common in data analytics and data science courses.
Let's imagine we need to analyze the data, compute the mean age and fare of all passengers and of male passengers who survived, and write the results to two separate files.
Data in this example was previously downloaded using the connector component. We will show below how you can reproduce the code on your computer.
```r
library(dplyr)  # for computing means and the pipe operator

# titanic columns:
# Survived, Pclass, Name, Sex, Age,
# Siblings.Spouses.Aboard, Parents.Children.Aboard, Fare

# read table
titanic_in <- read.csv("in/tables/titanic.csv", stringsAsFactors = FALSE)

# compute mean age and fare of all passengers and store in a data frame
mean_all <- data.frame(titanic_in %>% summarise(mean_age = mean(Age), mean_fare = mean(Fare)))

# filter table for male passengers who survived
data_selected_male <- subset(titanic_in, Sex == 'male' & Survived == 1)

# compute mean age and fare of surviving male passengers and store in a data frame
mean_male_survived <- data.frame(data_selected_male %>% summarise(mean_age = mean(Age), mean_fare = mean(Fare)))

# write output means to tables
write.csv(mean_all, file = "out/tables/mean_all.csv", row.names = FALSE)
write.csv(mean_male_survived, file = "out/tables/mean_male_survived.csv", row.names = FALSE)
```
Reproducing and debugging
If you want to run the code on your computer for testing and debugging, or you want to write the script in a local IDE and copy-paste it into the Meiro Integrations configuration, the easiest way is to reproduce the folder structure below:
```
/data
  script.r
  /in
    /tables
    /files
  /out
    /tables
    /files
```
The script file should be located in the /data folder; input files and tables go in the corresponding subfolders of in/; output files and tables are written to out/files and out/tables respectively.
To reproduce Example 2, download the dataset, save it to the in/tables folder as titanic.csv, paste the code from the example into the script file, and run it. New files will be written to the out/tables folder.
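Putting it together, a local run can be sketched like this. Here tempdir() stands in for wherever you keep your /data folder, and the one-line script.r written below is just a stand-in for your real script; the point is that scripts assume /data is the working directory, so you switch to it before sourcing:

```r
# Reproduce the /data folder structure in a scratch directory.
data_dir <- file.path(tempdir(), "data")
dir.create(file.path(data_dir, "in", "tables"), recursive = TRUE, showWarnings = FALSE)
dir.create(file.path(data_dir, "out", "tables"), recursive = TRUE, showWarnings = FALSE)

# Stand-in input table and a one-line stand-in script.r.
write.csv(data.frame(Survived = c(0, 1)),
          file.path(data_dir, "in", "tables", "titanic.csv"), row.names = FALSE)
writeLines('write.csv(read.csv("in/tables/titanic.csv"), "out/tables/copy.csv", row.names = FALSE)',
           file.path(data_dir, "script.r"))

# Switch to the data folder and run the script, as the platform would.
old_wd <- setwd(data_dir)
source("script.r")
setwd(old_wd)
```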