CloudWright as a Lightweight Retool API Resource

January 17, 2020

Retool is a powerful tool for building and sharing internal UI applications. The editor makes it easy for semi-technical users to display tables, issue SQL queries against internal resources, and make API calls.

While Retool integrates with many SaaS and Cloud services, it is often difficult to embed complex backend logic into the front-end actions Retool provides:

  • Applications may need native integrations to cloud services not easily performed via API calls
  • Performing complex backend actions within Retool becomes unwieldy
  • Users want control over error handling, tracking, and logging for connections to company resources
  • Users may also want to invoke these operations outside of Retool as generic API calls
  • Operations may need to access resources within a VPC network without running Retool on-premises

Retool has thought of this, and makes it easy to query HTTP APIs, so complex logic can be put into an API. However, building and maintaining an API is an intimidating requirement for users who don't already maintain a microservice framework.

CloudWright makes it easy to build Applications which act as simple, secure, scalable API endpoints — and then use those endpoints in Retool.

CloudWright <-> Retool

CloudWright is an ideal home for your lightweight API endpoints:

  • CloudWright applications run as serverless functions — cost-effective, and no servers to maintain
  • CloudWright supports granular, token-based auth, perfect for Retool API resources
  • CloudWright logs and tracks your apps, making monitoring, debugging, and error handling easy

This article shows how to build a lightweight CloudWright-backed API and use it in a Retool App — in just a couple minutes.

Customer File Delivery

For this example, we'll use Retool + CloudWright to build a tool for a Customer Implementation team.

Our example business generates — perhaps via a scheduled report — files we deliver to customers. After generation, these files are put in a staging bucket on S3 to await manual QA and delivery. The Customer Implementation team's job, for each generated file, is to:

  1. Manually QA the file to make sure the data looks OK
  2. Copy the file to a customer-specific S3 bucket, found via a MySQL query
  3. Email the customer to let them know the file has been delivered
  4. Update the file status in MySQL

Or, as we would chart out this workflow to the implementation team:

CloudFile Workflow

We want to give the Customer Implementation team a tool which automates most of this workflow, to avoid tedious, manual, error-prone operations. We'll also expose these operations via a simple UI where the team can look up the status of deliveries, deliver a file after QA, or reset the delivery.

Because our file delivery logic is going to be complex — we want to perform file validation and send emails — we'll move the delivery logic into a back-end API, and make simple API calls from the UI.

This article shows how to combine CloudWright and Retool to build this tool — CloudWright to power the backend operations, and Retool to provide a powerful, authenticated internal UI.

Building an API endpoint in CloudWright

In this section we'll build the API in CloudWright — setting up our modules, writing the code, and setting up an HTTP Trigger.

Modules

A CloudWright Module is a configure-once, use everywhere integration to a specific internal or external service. For this application, we'll use three Modules:

  • The 'AWS API' Module gives us an authenticated S3 client for file operations
  • The 'Gmail' module lets us send emails as an authenticated service user
  • The 'MySQL' module lets us query our internal database for files and customer accounts

Setting up the modules

We'll show how to set these modules up from scratch, but remember — you'll usually be able to re-use modules your team has already created. For a full explanation of how to create and configure Modules, check out the CloudWright docs.

S3

CloudWright exposes all AWS APIs through the 'AWS API' Module template. Configuring this module is easy — you'll just need your AWS Access Key ID and Secret Access Key (to learn how to create these credentials, check out the Amazon documentation).

AWS

To send emails from a service Gmail account, we'll need to use the 'Send Email with Gmail' template, and authenticate as a service user — here, we'll use example-user@cloudwright.io:

Gmail

Connecting to a database is easy with the MySQL Module:

MySQL

We'll connect to this database directly — if your database is behind a firewall, you'll need to set up a Deployment Zone with access to your resources.

To automate this workflow in CloudWright, we'll need to create a new Application — the quickstart guide walks you through creating a new application (you'll want to read that tutorial first).

In our new Application, we'll attach the three Modules we need — MySQL, AWS, and Gmail:

MySQL

With these Modules added, we're ready for the important part — building the tool.


Building the application

We need our CloudWright API to expose three operations to Retool:

  • show — a list of file deliveries to show our CI team
  • deliver — validate a file, mark it as QA-complete, and send it to the customer
  • reset — revert a delivery and apologize to the customer

We'll cover the important bits of the Application in this article — there's a bit of boilerplate we won't cover, but you can check out the whole application below.

See the whole application
from urllib.parse import urlparse
import os
import sqlalchemy as sql

mysql = CloudWright.get_module("mysql")
s3 = CloudWright.get_module("aws_api").client('s3')
gmail = CloudWright.get_module("gmail")

operation = CloudWright.inputs.get("operation")
file_id = CloudWright.inputs.get("file_id")

if operation == 'show':

    # Simple list of all files.  In reality, this would be filtered on recency or status
    fields = ["files.id", "source_path", "status", "delivered_path", "name", "email"]
    show=f"SELECT {','.join(fields)} FROM files INNER JOIN customers \
        ON files.customer_id = customers.id"
    res = mysql.execute(show)

    # Map all of the raw columns into a dict for the retool UI
    CloudWright.response.set_value({"values": list(map(lambda result: dict(zip(fields, result)), res.fetchall()))})

# On 'deliver'
#  - Compute the S3 delivery destination, and copy the file
#  - Notify the customer that their file has been delivered
#  - Update the record as 'delivered' in the database
if operation == 'deliver':

    # Get information about the file we want to deliver, and the customer it is for
    fileq=sql.text("SELECT customer_id, source_path FROM files where ID=:fid")
    fetched_file = mysql.execute(fileq, fid=file_id).fetchone()

    cust=sql.text("SELECT email, delivery_bucket FROM customers where ID=:cid")
    customer = mysql.execute(cust, cid=fetched_file[0]).fetchone()

    print(f"Delivering file for customer: {customer[0]}")

    # Figure out where we want to deliver the file (a subdir of the customer's output bucket)
    source_location = urlparse(fetched_file[1])
    deliver_to = urlparse(customer[1])
    delivery_key = os.path.join(deliver_to.path.lstrip('/'), os.path.basename(source_location.path))
    delivered_path = f"s3://{deliver_to.netloc}/{delivery_key}"

    # Copy the file to the output using the S3 API 
    print(f"Delivering file to: {delivered_path}")
    
    source={
        'Bucket': source_location.netloc, 
        'Key': source_location.path.lstrip('/')
    }

    # We'll do some basic validation now that the file is OK to deliver:
    obj = s3.get_object(**source)

    # 1) We check that the file isn't empty (or almost empty)
    if obj['ContentLength'] < 40:
        raise Exception('File was too small — likely a malformed delivery')

    # 2) We'll check that the header row had the expected number of columns
    header = next(obj['Body'].iter_lines())
    if len(header.decode().split(',')) != 3:
        raise Exception('File header had an unexpected number of fields')

    s3.copy_object(CopySource = source, 
        Bucket = deliver_to.netloc, 
        Key = delivery_key
    )

    # Notify the customer
    print(f"Emailing customer: {customer[0]}")
    gmail.send_email(
        "File delivered!", 
        f"Your file is ready! You can find it here: {delivered_path}", 
        customer[0]
    )

    # Mark it as delivered in our database
    print(f"Updating record status to delivered: {file_id}")
    query = sql.text("UPDATE files SET status=1, \
        delivered_path=:path WHERE ID=:file_id")
    mysql.execute(query, path=delivered_path, file_id=file_id)

    CloudWright.response.set_value(f"Delivered file {file_id}!")

# On 'reset'
#  - Delete the file from S3
#  - Set the file status back to 'undelivered'
#  - Tell the customer that the file isn't ready yet 
if operation == 'reset':

    # Fetch info about the customer and the file we want to reset
    fetched_file = mysql.execute(sql.text("SELECT customer_id, delivered_path FROM files where ID=:file_id"), file_id=file_id).fetchone()
    customer = mysql.execute(sql.text("SELECT email FROM customers where ID=:customer_id"), customer_id=fetched_file[0]).fetchone()

    # Wipe it from the S3 destination
    path = fetched_file[1]

    if path is not None:
        delivered_path = urlparse(path)
        s3.delete_object(
            Bucket = delivered_path.netloc, 
            Key = delivered_path.path.lstrip('/')
        )

        # Set the status back to undelivered
        rev=sql.text("UPDATE files SET status=0, delivered_path=NULL WHERE ID=:fid")        
        mysql.execute(rev, fid=file_id)

        # Let the customer know that the file isn't ready after all
        gmail.send_email("Whoops!", "Your file will be ready soon!",  customer[0])

        CloudWright.response.set_value(f"Reset file {file_id}!")

We'll take two inputs to this application — a file ID to process and an operation to perform on the file (show, deliver, or reset):

operation = CloudWright.inputs.get("operation")
file_id = CloudWright.inputs.get("file_id")

We'll import the three Modules our app uses. We only need the S3 Client from boto3, so we'll grab it here:

mysql = CloudWright.get_module("mysql")
s3 = CloudWright.get_module("aws_api").client('s3')
gmail = CloudWright.get_module("gmail")

These Modules power each of our three API operations:

deliver

Delivering the file requires two database queries — first to look up the file by ID, and then the customer the file was generated for:

fileq=sql.text("SELECT customer_id, source_path FROM files where ID=:fid")
fetched_file = mysql.execute(fileq, fid=file_id).fetchone()

cust=sql.text("SELECT email, delivery_bucket FROM customers where ID=:cid")
customer = mysql.execute(cust, cid=fetched_file[0]).fetchone()

We'll perform four backend operations:

  1. Validate that the file looks OK to deliver. Using the boto3 S3 API, we can perform file operations like reading the first row and checking file metadata:
    obj = s3.get_object(**source)

    # 1) We check that the file isn't empty (or almost empty)
    if obj['ContentLength'] < 40:
        raise Exception('File was too small — likely a malformed delivery')

    # 2) We'll check that the header row had the expected number of columns
    header = next(obj['Body'].iter_lines())
    if len(header.decode().split(',')) != 3:
        raise Exception('File header had an unexpected number of fields')
  2. Move the file to the final S3 destination:
    source={
        'Bucket': source_location.netloc, 
        'Key': source_location.path.lstrip('/')
    }

    s3.copy_object(CopySource = source, 
        Bucket = deliver_to.netloc, 
        Key = delivery_key
    )
  3. Send an email to the customer (the customer's contact email is stored in our database model):
    gmail.send_email(
        "File delivered!", 
        f"Your file is ready! You can find it here: {delivered_path}", 
        customer[0]
    )
  4. Update the file status in the database to delivered:
    query = sql.text("UPDATE files SET status=1, \
        delivered_path=:path WHERE ID=:file_id")
    mysql.execute(query, path=delivered_path, file_id=file_id)
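The delivery destination in step 2 is computed from the staged file's path and the customer's delivery bucket URL. Here's a standalone sketch of that path arithmetic, using hypothetical bucket names and file paths:

```python
from urllib.parse import urlparse
import os

# Hypothetical staged file and customer delivery bucket
source_location = urlparse("s3://staging-bucket/reports/2020-01/file1.txt")
deliver_to = urlparse("s3://acme-deliveries/incoming")

# Delivery key = customer's prefix + the original file name
delivery_key = os.path.join(deliver_to.path.lstrip('/'),
                            os.path.basename(source_location.path))
delivered_path = f"s3://{deliver_to.netloc}/{delivery_key}"

print(delivered_path)  # s3://acme-deliveries/incoming/file1.txt
```

Keeping the customer's prefix means multiple deliveries land side-by-side in the same bucket subdirectory without clobbering each other's names.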

reset

For reset, we'll perform three operations — delete the file from S3, reset the status, and send the customer an apology email:

s3.delete_object(
    Bucket = delivered_path.netloc, 
    Key = delivered_path.path.lstrip('/')
)
...
rev=sql.text("UPDATE files SET status=0, delivered_path=NULL WHERE ID=:fid")        
mysql.execute(rev, fid=file_id)
...
gmail.send_email("Whoops!", "Your file will be ready soon!",  customer[0])

show

To index our files, we'll do a simple MySQL scan, joining in customer details on customer ID. We could have done this in Retool — but this way we can use CloudWright for logging and tracking:

show=f"SELECT {','.join(fields)} FROM files INNER JOIN customers \
    ON files.customer_id = customers.id"
res = mysql.execute(show)
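The full application then zips each result row with the field names so Retool's table can reference columns by name. A minimal sketch of that mapping, with a made-up result row:

```python
fields = ["files.id", "source_path", "status", "delivered_path", "name", "email"]

# A made-up result row, in the same column order as the SELECT above
rows = [
    (1, "s3://staging-bucket/file1.txt", 0, None, "Acme Corp", "ops@acme.example"),
]

# The same transformation the application applies before returning to Retool
values = [dict(zip(fields, row)) for row in rows]

print(values[0]["name"])  # Acme Corp
```

Returning `{"values": [...]}` in this shape lets the Retool table component bind columns directly to keys like `status` and `email`.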

Setting up an HTTP Trigger

Last but not least, to create our lightweight API, we'll need to set up an HTTP Trigger to let external services invoke the application. We can do this directly through the editor UI:

Trigger Config

We'll need to hold onto two configurations for Retool in the next section:

  • The Endpoint Address. In this example we're using a GCP Deployment Zone, so the URL is of the form https://us-central1-cloudwright-production.cloudfunctions.net/<my_id>. Endpoints for Applications in AWS Zones will have a different base URL.
  • The Authentication Token. To secure the endpoint, we'll need to use Token Authentication and create a token we'll use to authenticate Retool.
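With the endpoint address and token in hand, any HTTP client can call this API, not just Retool. Here's a sketch using Python's standard library, assuming the token is sent as a Bearer token in the Authorization header (your trigger configuration may differ) and using placeholder endpoint and token values:

```python
import json
from urllib import request

# Placeholder values — substitute your real endpoint URL and auth token
ENDPOINT = "https://us-central1-cloudwright-production.cloudfunctions.net/my-app-id"
TOKEN = "my-secret-token"

def build_request(operation, file_id=None):
    """Assemble a POST carrying the inputs the application reads
    via CloudWright.inputs.get()."""
    payload = {"operation": operation}
    if file_id is not None:
        payload["file_id"] = file_id
    return request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To invoke: request.urlopen(build_request("deliver", file_id=1))
```

This is the same request shape Retool will send once we configure it as a REST API resource in the next section.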

Testing it out

With our Modules configured, code written, and Trigger set up, we can use the CloudWright dry-run console to test that our API does what we expect. Plugging in a few test operations, we confirm we can deliver, reset, and list pending deliveries:

Dry Run

After publishing our application, we're ready to head over to Retool to use our API in an App.


Building a Retool application

Note: This article assumes you know the basics of building a Retool App. If you don't, don't worry — they have great docs. We recommend running through their Quickstart guide first.

We'll start by setting up a new REST API resource in our Retool project. We only need two things to create the API — the Base URL, and the access token we created for the HTTP Trigger. The token is added as an Authorization header:

Retool Backend

After creating a new Retool App, we'll create three queries against the resource we created — show_files, to populate the table, and deliver_file and reset_file to attach to buttons:

Retool Queries

These queries can be hooked up to tables and buttons like any other Retool API resource. Our main interface will be a table which lists files and shows deliver or reset buttons (depending on the status). The table will be populated by our show_files query, and the buttons supported by the deliver_file and reset_file queries:

Retool Buttons

We can test out the App in our nifty new UI by clicking Deliver and Reset a few times. We can check that our delivery validation logic works by attempting to deliver File 2, an invalid delivery (we've set up file1.txt to be an empty file).

CloudWright provides robust tracking and error handling, so we can track our queries in the CloudWright Run Console in realtime.



The Run Console lets us dig into the details and logs of the backend queries our Retool App created. We want to see details about the failed delivery we attempted a minute ago, so we'll inspect that execution:

Failed Delivery

Exactly what we wanted — our file validation saves us from embarrassing ourselves by sending the customer an empty file.

That's it! With just a few minutes of work, our UI is hooked up to an authenticated, scalable CloudWright API which handles backend operations. We're ready to hand this tool off to our CI team.

Since our CloudWright application is served by best-in-class serverless platforms (GCP Cloud Functions or AWS Lambda), we get for free all the benefits of a modern cloud deployment:

  • Our API will scale effortlessly
  • Minimal downtime concerns — our API is as stable as the cloud itself
  • Serverless functions don't require dedicated hardware, so we incur costs only when our API is called

Until we want to update our delivery workflow, we can (almost) forget that it exists.

Conclusion

Retool is a great tool for building internal UIs, and CloudWright can make it even more powerful by offloading complex backend operations into robust, easy-to-write, no-maintenance Python APIs.

If you'd like to learn more, see a live demo, or start using CloudWright to turbocharge your own Retool Apps, let's get in touch.