When should I use bulk query? Read on and let's analyse it together.
This blog builds upon my previous blog, Consuming Shopify Product API Example. We'll compare the bulk query with the non-bulk query we did in that example.
Shopify Bulk Query Changes
These are the changes we made to the previous example so we can do Shopify bulk query.
Just a few changes here. We created a new package named service to handle the bulk query operations and a new path to hit to pull the cached products. Quick and easy.
model.go
package service
type Node struct {
	ID          string   `json:"id,omitempty"`
	Title       string   `json:"title,omitempty"`
	Handle      string   `json:"handle,omitempty"`
	Vendor      string   `json:"vendor,omitempty"`
	ProductType string   `json:"productType,omitempty"`
	Tags        []string `json:"tags,omitempty"`
	Namespace   string   `json:"namespace,omitempty"`
	Key         string   `json:"key,omitempty"`
	Value       string   `json:"value,omitempty"`
	ParentID    string   `json:"__parentId,omitempty"`
}

type Product struct {
	ID          string      `json:"id,omitempty"`
	Title       string      `json:"title,omitempty"`
	Handle      string      `json:"handle,omitempty"`
	Vendor      string      `json:"vendor,omitempty"`
	ProductType string      `json:"productType,omitempty"`
	Tags        []string    `json:"tags,omitempty"`
	Metafields  []Metafield `json:"metafields,omitempty"`
}

type Metafield struct {
	Namespace string `json:"namespace,omitempty"`
	Key       string `json:"key,omitempty"`
	Value     string `json:"value,omitempty"`
	ParentID  string `json:"__parentId,omitempty"`
}
This is where all the magic happens. We issue a bulk query request via a mutation operation. We then unmarshal the response and check whether the bulk operation has been created. If it has, we poll the current bulk operation until it reaches a terminal status (completed, canceled, failed, etc.). Once it has completed, we download the result and save it into a temporary file. This file is in JSONL (JSON Lines) format, so we then have to parse the file in order to build the product tree.
There is also a webhook way of checking the bulk operation status. It is recommended over polling as it limits the number of redundant API calls. But for the purposes of this example, we'll do polling.
On start up, you should see something like below. Shopify has returned a download URL, which means the bulk query has completed and we are ready to serve the cached products.
Comparison with Non-Bulk Query
Now let's compare the new way of pulling data to the old way.
Can you spot the difference? In terms of speed, the response time of the new way was clearly faster: just 4ms compared to 279ms. What's more, not only did the old way take longer, it also returned less data: 639 B compared to 4.65 KB. In other words, the old way only returned 3 products while the new way returned all products, including metafields. That's the icing on the cake. As for the app's start-up time, the extra cost was negligible.
Shopify Bulk Query Wrap Up
Would you do bulk query now or not? It is up to you to identify a potential bulk query. Queries that use pagination to get all pages of results are the most common candidates. There are limitations on a bulk query though. For example, you can't pull data that's nested two levels deep. Check the Shopify documentation for more information.
There you have it. Another way to pull product data from the Shopify GraphQL Admin API. If you've got a better way of doing things (e.g. how to parse the JSONL better), just raise a pull request. Happy to look at it. Grab the repo here, github.com/jpllosa/shopify-product-api, it's on the bulk-query branch.
Nowadays, it is vital to keep data secure. If your connection to a database goes through an untrusted network, it is prudent to encrypt data going through the wire. Fortunately, we can make use of MySQL's internal SSL (Secure Sockets Layer) support to make the connection secure. Read on and learn how to secure a connection to MySQL in Golang.
SSL is a standard technology for securing an internet connection by encrypting data sent between a client and a server. SSL and TLS (Transport Layer Security) are sometimes used interchangeably, but TLS is an updated, more secure version of SSL that fixes its known vulnerabilities.
Creating SSL Certificates
Create a clean directory (e.g. mkdir pems) and generate the certificates and keys in it. Note that the Common Name value used for the server and client certificates/keys must each differ from the Common Name value used for the CA certificate. On my Windows 10 machine, I'm using OpenSSL 3.1.2 1 Aug 2023 to generate the certificates and keys below.
$ openssl genrsa 2048 > my-ca-key.pem
$ openssl req -new -x509 -nodes -days 3600 -key my-ca-key.pem -out my-ca.pem -addext "subjectAltName = DNS:localhost, IP:127.0.0.1"
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:CA
Email Address []:
Check the contents of the certificate, openssl x509 -in my-ca.pem -text.
$ openssl x509 -in my-ca.pem -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
59:76:52:41:c9:d8:7a:d2:51:2a:05:3a:a6:f9:d8:3e:9d:a3:3d:a3
Signature Algorithm: sha256WithRSAEncryption
Issuer: C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = CA
Validity
Not Before: Nov 5 09:49:14 2023 GMT
Not After : Sep 13 09:49:14 2033 GMT
Subject: C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = CA
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:a8:cc:c5:14:d9:ef:90:07:43:81:b1:80:f7:42:
...snipped...
16:75:45:05:69:5a:73:24:b3:f2:93:cb:5f:3b:8f:
31:15
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Key Identifier:
B6:1A:8E:CA:32:F7:AF:A9:35:EF:27:5F:BF:DE:BA:A2:4B:66:F4:39
X509v3 Authority Key Identifier:
B6:1A:8E:CA:32:F7:AF:A9:35:EF:27:5F:BF:DE:BA:A2:4B:66:F4:39
X509v3 Basic Constraints:
CA:TRUE
X509v3 Subject Alternative Name:
DNS:localhost, IP Address:127.0.0.1
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
04:7a:75:59:80:fb:85:58:76:4a:c8:4d:69:d9:d3:72:42:fd:
...snipped...
Next, create the server certificate and key, and sign it with the CA. Create an extfile.cnf to carry the subjectAltName extension for the signed certificate.
In this example, I was using MySQL v5.7.21, which did not support the PKCS #8 format for the key, so I had to add the -traditional option to produce the key in PKCS #1 format. On a later version of MySQL (e.g. v5.7.33), I didn't need the -traditional option. Simply put, for PKCS #1 (Public-Key Cryptography Standard), your key starts with -----BEGIN RSA PRIVATE KEY----- and ends with -----END RSA PRIVATE KEY-----. For PKCS #8, it starts with -----BEGIN PRIVATE KEY----- and ends with -----END PRIVATE KEY-----. Search the PKCS specifications for more details.
$ openssl req -newkey rsa:2048 -nodes -keyout my-server-key.pem -out my-server-req.pem
...snipped key generation output...
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:Server
Email Address []:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
$ openssl req -newkey rsa:2048 -nodes -keyout my-client-key.pem -out my-client-req.pem
...snipped key generation output...
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:Client
Email Address []:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
$ openssl x509 -req -in my-client-req.pem -days 3600 -CA my-ca.pem -CAkey my-ca-key.pem -set_serial 01 -out my-client-cert.pem
Certificate request self-signature ok
subject=C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = Client
Check the contents of the certificate, openssl x509 -in my-client-cert.pem -text.
$ openssl x509 -in my-client-cert.pem -text
Certificate:
Data:
Version: 1 (0x0)
Serial Number: 1 (0x1)
Signature Algorithm: sha256WithRSAEncryption
Issuer: C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = CA
Validity
Not Before: Nov 5 09:59:50 2023 GMT
Not After : Sep 13 09:59:50 2033 GMT
Subject: C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = Client
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:a9:f1:74:07:eb:fa:96:d1:0f:c6:f0:32:c6:c6:
...snipped...
At the end of all those commands, you should have the files below in your pems folder. Verify your certificates like so:
$ openssl verify -CAfile my-ca.pem my-server-cert.pem my-client-cert.pem
my-server-cert.pem: OK
my-client-cert.pem: OK
Hooking Up the PEMs to MySQL
Copy my-ca.pem, my-server-cert.pem, and my-server-key.pem to your MySQL data directory. Run MySQL like so: mysqld --console --ssl-ca=my-ca.pem --ssl-cert=my-server-cert.pem --ssl-key=my-server-key.pem. You should then see something like below, which tells us that MySQL can read our CA certificate and we should be able to make SSL connections.
2023-11-05T22:14:14.749469Z 0 [Note] Plugin 'FEDERATED' is disabled.
2023-11-05T22:14:14.751171Z 0 [Note] InnoDB: Loading buffer pool(s) from D:\mysql-5.7.21-winx64\data\ib_buffer_pool
2023-11-05T22:14:15.205534Z 0 [Warning] CA certificate my-ca.pem is self signed.
2023-11-05T22:14:15.218988Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
2023-11-05T22:14:15.219894Z 0 [Note] IPv6 is available.
2023-11-05T22:14:15.220471Z 0 [Note] - '::' resolves to '::';
2023-11-05T22:14:15.224511Z 0 [Note] Server socket created on IP: '::'.
2023-11-05T22:14:15.301522Z 0 [Note] Event Scheduler: Loaded 0 events
2023-11-05T22:14:15.302306Z 0 [Note] mysqld: ready for connections.
Version: '5.7.21' socket: '' port: 3306 MySQL Community Server (GPL)
2023-11-05T22:14:15.512362Z 0 [Note] InnoDB: Buffer pool(s) load completed at 231105 22:14:15
You can also enable SSL database-wide by setting it in the MySQL config file. For this exercise, though, we'll require SSL connections per user. Once you can successfully connect, you can easily move up to securing it database-wide, which is what I'd recommend on a production system.
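For the database-wide route, the server options can go in the MySQL config file instead of the command line. A hedged sketch (option names as in MySQL 5.7; require_secure_transport additionally rejects every non-SSL connection, which goes beyond the per-user setup in this article):

```ini
# my.cnf / my.ini
[mysqld]
ssl-ca=my-ca.pem
ssl-cert=my-server-cert.pem
ssl-key=my-server-key.pem
# Optional: refuse any client that does not connect over SSL
require_secure_transport=ON
```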
You may or may not have to do this bit. For MySQL v5.7.21 on my Windows 10 Pro 19045 build and Go v1.18.3, I didn't have to. But on my other Windows 10 machine running MySQL v5.7.33 and Go v1.18.2, I had to make the target machine trust the self-signed certificates by manually importing them using Microsoft Management Console (MMC). Start a command prompt as Administrator, then run mmc. Search for how to manually import self-signed certificates for more details. In summary, I had to import the certificates under Console Root > Certificates > Personal > Certificates.
Testing the Connection
For this example, I was on MySQL Workbench v6.3. I should be on v8 really, but it's on my other machine. Anyway, add your PEM (Privacy Enhanced Mail) files in the appropriate boxes. A PEM file may contain just the public certificate, or an entire certificate chain (public key, private key, and root certificates).
For a successful test connection, you should have something like below:
Secure the User
I'm building on my past article, Accessing MySQL using Go Example. Assuming you followed that example, you most likely have a user named "golang" that you've used to connect to the database. Let's make golang connect securely. Run this SQL query to require SSL for golang:
ALTER USER golang@localhost REQUIRE SSL;
To remove the SSL requirement from golang, run:
ALTER USER golang@localhost REQUIRE NONE;
Check which TLS versions our MySQL v5.7.21 server supports. As you can see below, it only supports TLSv1 and TLSv1.1. Go's crypto/tls library negotiates up to TLSv1.3 by default, so we'll need to configure our connection to use a lower version. Otherwise we'll get an error like tls: server selected unsupported protocol version 302.
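One way to check the supported versions is a quick query against the server (tls_version is a MySQL 5.7+ system variable):

```sql
SHOW GLOBAL VARIABLES LIKE 'tls_version';
-- on this server: TLSv1,TLSv1.1
```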
// ... code snipped...
rootCertPool := x509.NewCertPool()
pem, err := ioutil.ReadFile("../pems/my-ca.pem")
if err != nil {
	log.Fatalf("configuration: reading CA pem file: %v", err)
}
if ok := rootCertPool.AppendCertsFromPEM(pem); !ok {
	log.Fatal("configuration: failed to append pem file")
}
clientCert := make([]tls.Certificate, 0, 1)
certs, err := tls.LoadX509KeyPair("../pems/my-client-cert.pem", "../pems/my-client-key.pem")
if err != nil {
	log.Fatalf("configuration: failed to load key pair: %v", err)
}
clientCert = append(clientCert, certs)
mysql.RegisterTLSConfig("secure", &tls.Config{
	RootCAs:      rootCertPool,
	Certificates: clientCert,
	MinVersion:   tls.VersionTLS10, // without this, Go defaults to TLS 1.3, which our MySQL does not support
	MaxVersion:   tls.VersionTLS11,
})
cfg.TLSConfig = "secure"
// ... code snipped ...
fmt.Println("Securely connected!")
rows, err := db.Query("SELECT * FROM album")
if err != nil {
	log.Fatalf("database query: %v", err)
}
defer rows.Close()
fmt.Printf("%2s %15s %15s %6s \n", "ID", "Title", "Artist", "Price")
for rows.Next() {
	var alb Album
	err := rows.Scan(&alb.ID, &alb.Title, &alb.Artist, &alb.Price)
	if err != nil {
		log.Printf("row scan: %v", err)
	} else {
		fmt.Printf("%2d %15s %15s %6.2f \n", alb.ID, alb.Title, alb.Artist, alb.Price)
	}
}
The notable difference from the API documentation is the addition of the version bounds to match our MySQL server. On another note, if you followed Creating SSL Certificates and Keys Using openssl from MySQL's documentation, chances are you didn't specify a subjectAltName, and you'll likely get an IP SANs error when connecting. I later found out that you can just add a server name in tls.Config and it should connect. Something like below:
After we are connected, we query the database and print out the records in a neat table format. Running the code above in VS Code via launch.json, you should have something like below:
DAP server listening at: 127.0.0.1:53186
Type 'dlv help' for list of commands.
Securely connected!
ID Title Artist Price
1 Blue Train John Coltrane 56.99
2 Giant Steps John Coltrane 63.99
3 Jeru Gerry Mulligan 17.99
4 Sarah Vaughan Sarah Vaughan 34.98
Process 6508 has exited with status 0
Detaching
Go TLS MySQL Wrap Up
There you have it. Hope you enjoyed reading and trying out the example as much as I did. The complete project can be cloned from github.com/jpllosa/go-relational-database/tree/tls-config. This piece of code is under the tls-config branch.
In this blog, we are going to learn how to pull product data from the Shopify API. To build upon previous examples like Starting Out on the go-chi Framework and GraphQL with Postman, we will implement this in Go using the go-chi framework and hit the Shopify GraphQL endpoint. Let's begin.
User Story
As a user, I want to retrieve my Shopify products in JSON format, so I can use it in my analytics software.
Shopify Setup
First off, I will not go into too much detail in setting Shopify up. I will let the Shopify documentation do that for me. To get this example running, you'll need a Shopify Partner account. When I created my Shopify partner account, it was free and I don't see any reason that it won't be in the future. Once you have a partner account, create a development store or use the quickstart store with test data preloaded.
Go into your store, then Settings, then Apps and sales channels, then Develop apps and then Create an app.
On the Configuration tab you must have the access scopes write_products and read_products. As of this writing, the webhook version is 2023-10.
On the API credentials tab, take note of your Admin API access token (you'll need to add this in the request header). Keep your access tokens secure. Only share them with developers that you trust to safely access your data. Take note of your API key and secret key as well.
There you have it. We are all set on the Shopify configuration end.
Testing the Shopify API
Before we go straight into coding, let's test the API call first. Fire up Postman and let's hit the GraphQL endpoint. The URL is of the form https://<storename>.myshopify.com/admin/api/<version>/graphql.json. Add X-Shopify-Access-Token in your header and the value is your Admin API access token. We'll do a query for the first 5 products. The image below shows we can happily make a request. Go to GraphQL Admin API reference for more details.
The first thing the program does is load up the configuration. As mentioned above, we are building upon a previous example, Starting Out on the go-chi Framework. We utilize the go-chi framework for routing incoming HTTP requests. Any request to the root resource will return a status in JSON format. Any request to the /products resource will return a list of products in JSON format. I will not expand on router.GetStatus because I have explained it in my previously mentioned blog, Starting Out on the go-chi Framework. I just moved it into a new package, making it more organized.
Making the configuration file JSON makes it easy for us to use. We simply read the file and unmarshal the byte array into our Config type, and Bob's your uncle. Then we can easily access the values through the struct fields, as seen in main.go (e.g. fmt.Sprintf(":%v", config.Port)).
JSON Marshaller
json.go
func Marshal(v any) []byte {
	jsonBytes, err := json.Marshal(v)
	if err != nil {
		panic(err)
	}
	return jsonBytes
}

func Unmarshal[T any](data []byte) T {
	var result T
	err := json.Unmarshal(data, &result)
	if err != nil {
		panic(err)
	}
	return result
}
Nothing special with this file. It is just a wrapper of the standard JSON Go library. Didn't want to muddy the resource handlers with JSON error handling.
Get Products
products.go
// snipped...
type GqlQuery struct {
	Query string `json:"query"`
}

type GqlResponse struct {
	Data       Data       `json:"data,omitempty"`
	Extensions Extensions `json:"extensions,omitempty"`
}

type Data struct {
	Products Products `json:"products,omitempty"`
}

type Products struct {
	Edges []Edge `json:"edges,omitempty"`
}

type Edge struct {
	Node Node `json:"node,omitempty"`
}

type Node struct {
	Id          string   `json:"id,omitempty"`
	Title       string   `json:"title,omitempty"`
	Handle      string   `json:"handle,omitempty"`
	Vendor      string   `json:"vendor,omitempty"`
	ProductType string   `json:"productType,omitempty"`
	Tags        []string `json:"tags,omitempty"`
}

type Extensions struct {
	Cost Cost `json:"cost,omitempty"`
}

type Cost struct {
	RequestedQueryCost int            `json:"requestedQueryCost,omitempty"`
	ActualQueryCost    int            `json:"actualQueryCost,omitempty"`
	ThrottleStatus     ThrottleStatus `json:"throttleStatus,omitempty"`
}

type ThrottleStatus struct {
	MaximumAvailable   float32 `json:"maximumAvailable,omitempty"`
	CurrentlyAvailable int     `json:"currentlyAvailable,omitempty"`
	RestoreRate        float32 `json:"restoreRate,omitempty"`
}

func GetProducts(config config.Config) chi.Router {
	router := chi.NewRouter()
	router.Get("/", func(w http.ResponseWriter, r *http.Request) {
		productsGql := `{
			products(first: 3, reverse: false) {
				edges {
					node {
						id
						title
						handle
						vendor
						productType
						tags
					}
				}
			}
		}`
		query := GqlQuery{
			Query: productsGql,
		}
		q := marshaller.Marshal(query)
		body := strings.NewReader(string(q))

		client := &http.Client{}
		req, err := http.NewRequest(http.MethodPost, config.Shopify.Endpoint, body)
		if err != nil {
			panic(err)
		}
		req.Header.Add("Content-Type", "application/json")
		req.Header.Add("X-Shopify-Access-Token", config.Shopify.AccessToken)

		resp, err := client.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println("Response status:", resp.Status)

		responseBody, err := ioutil.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}

		gqlResp := marshaller.Unmarshal[GqlResponse](responseBody)
		jsonBytes := marshaller.Marshal(gqlResp.Data.Products.Edges)

		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(200)
		w.Write(jsonBytes)
	})
	return router
}
Welcome to the heart of this article. We start by defining struct types for the GraphQL response so we can easily unmarshal it. As you saw earlier in Testing the Shopify API, the Postman image shows a snapshot of the GraphQL response from Shopify. When GetProducts is called, it starts by creating the GraphQL query for products. We marshal the query into a byte array and pass it into a new HTTP request. We create an HTTP client so we can add the Shopify access token and the content type to the request header. We then send the request by calling client.Do(req) and read the response, converting it into a byte array so we can unmarshal it. After that we strip out the stuff we don't need; we only need the nodes. Once we have the nodes, we convert them into bytes and send them back. That is all there is to it. Oh yeah, don't forget to close the response body after using it.
Output
I've set my port to 4000 so hitting http://localhost:4000 will return a response like below:
{
"id": 1,
"message": "API ready and waiting"
}
Hitting http://localhost:4000/products will return a response like below:
There you have it. A way to pull data from the Shopify GraphQL Admin API and spitting it out in a slightly different format. In between we utilized the go-chi framework for routing requests. We used the standard Go library for JSON manipulation, HTTP communication and file access. Lastly, did we satisfy the user story? Grab the full repo here, github.com/jpllosa/shopify-product-api.
What do you hate the most about pull requests? Is it the formatting changes that clearly add no value and just make the diff confusing? Or a PR that breaks the build? Don't you just hate a PR that breaks the build? In this example we will automate part of the code review by checking for a broken build when a pull request is opened. How is this done? By the power of GitHub Actions.
GitHub Actions
We'll utilize my old GitHub project, github.com/jpllosa/tdd-junit from Test-Driven Development with Spring Boot Starter Test. Clicking on the Actions tab, GitHub will offer GitHub Actions if you haven't created any. If you already have an action, hit the New workflow button instead. For our example repo, we should see something like below:
GitHub Actions Java with Maven
Clicking Configure will create a template maven.yml file in the .github/workflows folder, as shown below. Editing a yml file in GitHub is also a nice way to search for actions. Commit, then pull to get a local copy. Let's create a development branch and a feature branch (github-action-java-maven). What we would like to happen is this: when a pull request into development is opened, it automatically triggers a build and test run, and creates a review comment if the build is broken.
Finding GitHub Actions
$ git pull
From https://github.com/jpllosa/tdd-junit
* [new branch] development -> origin/development
* [new branch] github-action-java-maven -> origin/github-action-java-maven
Already up to date.
$ git checkout github-action-java-maven
maven.yml
Let's edit the maven.yml template. If you're using Spring Tool Suite and can't find your file, you might have to untick the *.resources filter. Then you should be able to see the yml file.
Spring Tool Suite Filter
Spring Tool Suite Java Element Filter
GitHub Workflow YML file
Your yml file should look like below:
name: Java CI with Maven
on:
  pull_request:
    branches:
      - 'development'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3.6.0
      - name: Set up JDK
        uses: actions/setup-java@v3.12.0
        with:
          java-version: '8'
          distribution: 'semeru'
          cache: maven
      - name: Build with Maven
        run: mvn -B clean package
      - name: View context attributes
        if: ${{ failure() }}
        uses: actions/github-script@v6.4.1
        with:
          script: |
            console.log(context);
          github-token: ${{ secrets.GITHUB_TOKEN }}
          debug: true
      - name: Create PR Comment
        if: ${{ failure() }}
        uses: actions/github-script@v6.4.1
        with:
          script: |
            github.rest.pulls.createReview({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: context.payload.pull_request.number,
              event: "COMMENT",
              body: "It is a mortal sin to PR a broken build! :rage:",
              comments: [],
            })
          github-token: ${{ secrets.GITHUB_TOKEN }}
          debug: true
I won't be explaining workflows in too much detail. I'll let the GitHub Docs do that for me. I will however explain this YAML file. Let's start from the top. Our workflow name is Java CI with Maven. This workflow is triggered when a pull request is opened on branch development. Our workflow run is made up of a single job, with a job ID of build. The type of machine that will process our job is the latest version of Ubuntu.
Steps: the place where all the grunt work is done. First is Checkout, which uses the Checkout GitHub action to check out our repository. Second is Set up JDK, which uses the Setup Java JDK GitHub action to download and set up Java version 8. Third is Build with Maven, which runs the mvn -B clean package command. Fourth is View context attributes. It uses the GitHub Script GitHub action, which lets us write scripts in our workflow that use the GitHub API and the workflow run context. This step is run only when a previous step has failed; failure() is a status check function. This step is for debugging purposes only, hence the log to the console. Fifth is Create PR Comment, which uses the same GitHub action as the previous step. If the Build with Maven step fails, this will create a review comment saying that it is a mortal sin to create a pull request with a broken build. LOL!
GitHub Actions in Action
Let's add a unit test that fails in MathFunTest.java, push it, then create a PR. Our source branch is github-action-java-maven and the target is development, so that the workflow gets triggered.
package com.blogspot.jpllosa;

// snipped...
@SpringBootTest
public class MathFunTest {
	// snipped...
	@Test
	public void testThatFails() {
		fail("Not yet implemented");
	}
}
What's the outcome? You should see something like below when creating a pull request with a broken build:
Pull Request Review Comment
Remove the failing test then push the code and you should have something like below:
GitHub Actions Success
GitHub Actions Java with Maven Summary
GitHub Actions did all the work for us: setting up the machine with all the required Java tools, checking out the repo, building and running the tests, and finally creating the review comment via the GitHub Script GitHub action API. There you have it. An automated code review comment for those PRs breaking the build. Hope you had fun reading and applying a similar GitHub action. I know I did.
This article is an example of how to create GraphQL requests with Postman. We will be utilizing the public Star Wars GraphQL API for this practice.
GraphQL
What is GraphQL? graphql.org states that it is a query language for APIs and a runtime for fulfilling those queries with your existing data. It provides a complete and understandable description of the data in your API. Gives clients the power to ask for exactly what they need and nothing more. Makes it easier to evolve APIs over time. Enables powerful developer tools. Did that make sense? Maybe some examples might help.
GraphQL came to be because of the shortcomings of RESTful APIs. One of the issues with a RESTful API is that it overfetches or underfetches data. That is, a single request either gets too much or too little data. Another problem is you might need to hit multiple endpoints just to get the data that you want.
Enter GraphQL. You only need one endpoint; supply it with a query and receive only the data that you asked for. Another advantage is its type system. The typed schema ensures that we only ask for what's possible, and provides clear and helpful errors before we even send a request, via tools (e.g. Postman).
There it is, a better succuessor to REST.
Configuring Postman
I'm using Postman for Windows v10.17.4. There are two ways of making a GraphQL request in Postman. First is the good old fashioned HTTP request and the other is the GraphQL request. Clicking New, you should be able to see something like below:
GraphQL New
HTTP Way
We'll use the public Star Wars GraphQL API for this example. We'll hit the graph endpoint, https://swapi-graphql.netlify.app/.netlify/functions/index with the query:
query Query {
  allFilms {
    films {
      title
    }
  }
}
To understand the query in more detail, we'll have to read the Star Wars Schema Reference. But this example is not about that. We requested all the Star Wars film titles. We got back something like below:
GraphQL HTTP
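For reference, a GraphQL HTTP request is typically a POST with a JSON body carrying the query as a string. The payload sent to the endpoint looks roughly like this:

```json
{
  "query": "query Query { allFilms { films { title } } }"
}
```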
Things to note on the above image. The selected radio button is GraphQL. Next is the Schema Fetched text. This tells us that Postman will do type checking based on the schema and will offer suggestions and syntax checking (red line under the letter d of dir). Even before we send the request, we already know that it will cause an error.
GraphQL Context Suggestion
GraphQL Error Field
GraphQL Way
The GraphQL way is even better. We can create the query just by ticking the checkboxes. Thank you typed schema! Need I say more?
GraphQL Way
We can also do variables. Take note of the id of "A New Hope". We can pass in arguments like so:
GraphQL Way Variable
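As a sketch, the query with a variable would look like the following. The `film` field and its `id` argument come from the Star Wars schema; in the Variables pane you would set `{ "id": "..." }` with the ID copied from the earlier response.

```graphql
query Film($id: ID) {
  film(id: $id) {
    title
    director
  }
}
```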
GraphQL with Postman Summary
So far, we have only touched on queries. We haven't done any mutations since the free public Star Wars API doesn't support them. To explain it succinctly, a query fetches data while a mutation writes data. Congratulations! You have now taken the first few steps in using Postman for your GraphQL needs. You have learned to create a request and add variables to vary your request, all done in just a few minutes. There you have it. A simple example of how to use GraphQL with Postman.
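For completeness, a mutation looks structurally similar to a query. A generic, hypothetical sketch (this field and input type are made up, since the public Star Wars API offers no mutations):

```graphql
mutation CreateReview($review: ReviewInput!) {
  createReview(review: $review) {
    id
    commentary
  }
}
```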
In this article we will demonstrate a React front-end app served through Spring Boot's embedded Tomcat. Why do it this way? We can serve a React app in different ways (e.g. Apache Web Server, Nginx, etc.). What is so special about serving a React app through Spring Boot's Tomcat? Nothing, really. One reason could be that the development team is already well versed in Spring Boot with Tomcat; they have mastered configuring Tomcat and investing in a new web server (e.g. Nginx) is of little value. Another could be that the deployment pipeline is already set up to do Java/Maven deployment (e.g. Jenkins). Lastly, it could be that you are a new hire and the development team already does it that way. That is, we don't have a choice but to support an existing React FE app served through Spring Boot.
Our focus will be on building the React app so that it can be deployed through a Spring Boot fat jar. We will lightly touch on starting the Spring Boot and React apps. It is assumed that the reader has knowledge of Node, React, Java, Maven, Spring Boot, etc. and knows how to set up the mentioned technology stack.
If you have been reading my past blogs, then the previous paragraphs sound so familiar. That is because I have done a similar thing for an Angular app. Essentially this article is similar to Angular FE Spring Boot Example. But I added in an extra. In my Angular FE Spring Boot Example, we had to do two steps to build the fat jar. In this article, we will build the fat jar in one step. Sounds good? Let's do it!
Building
Same as in my Angular FE Spring Boot Example, there are two ways to build a React app so that it can be deployed as a single Spring Boot fat jar. The first is to use the exec-maven-plugin. The second is to use the maven-resources-plugin.
But wait. There's more. We'll be using the maven-resources-plugin in combination with the frontend-maven-plugin. With these two plugins working together, we only need to run one command to build the fat jar. This means we have leveled up from the Angular FE Spring Boot Example.
Under the root directory, create a dev folder. This is where all the React stuff goes. Assuming you have Node, Node Version Manager and Node Package Manager ready, run npx create-react-app my-app to set up a web app. We should have something like below:
D:\workspace\react-fe-spring-boot\dev>npx create-react-app my-app
Need to install the following packages:
create-react-app@5.0.1
Ok to proceed? (y) y
Creating a new React app in D:\workspace\react-fe-spring-boot\dev\my-app.
Installing packages. This might take a couple of minutes.
Installing react, react-dom, and react-scripts with cra-template...
Created git commit.
Success! Created my-app at D:\workspace\react-fe-spring-boot\dev\my-app
Inside that directory, you can run several commands:
npm start
Starts the development server.
npm run build
Bundles the app into static files for production.
npm test
Starts the test runner.
npm run eject
Removes this tool and copies build dependencies, configuration files
and scripts into the app directory. If you do this, you can’t go back!
We suggest that you begin by typing:
cd my-app
npm start
Happy hacking!
We follow the bottom two commands (i.e. cd my-app, npm start) and we should have something like below. For more details, head over to Create React App. Thank you Create React App for creating our project. The React app should be available at localhost:3000.
React app on port 3000
Build the React project, npm run build. This will create files in the build directory. Take note of this folder as we will need it in our POM file. We should see something like below after the build:
D:\workspace\react-fe-spring-boot\dev>npm run build
> my-app@0.1.0 build
> react-scripts build
Creating an optimized production build...
Compiled successfully.
File sizes after gzip:
46.61 kB build\static\js\main.46f5c8f5.js
1.78 kB build\static\js\787.28cb0dcd.chunk.js
541 B build\static\css\main.073c9b0a.css
The project was built assuming it is hosted at /.
You can control this with the homepage field in your package.json.
The build folder is ready to be deployed.
You may serve it with a static server:
npm install -g serve
serve -s build
Find out more about deployment here:
https://cra.link/deployment
Maven Bit
We don't need to change any Java code. The key is in the POM. We will copy the build files over to the classes/static folder and then Maven will take care of making the fat jar. Add the maven-resources-plugin like below (make sure you get the directories right ;) ):
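A minimal sketch of that plugin configuration might look like the following. The exact paths are assumptions based on the layout set up earlier (React app under dev/my-app, build output in its build folder):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-resources-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-resources</id>
      <phase>validate</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <!-- Spring Boot serves static content from classes/static -->
        <outputDirectory>${basedir}/target/classes/static</outputDirectory>
        <resources>
          <resource>
            <directory>dev/my-app/build</directory>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>
```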
Do a mvn clean package. Now, run java -jar ./target/react-fe-spring-boot-0.0.1-SNAPSHOT.jar and we should be able to see it running on localhost:8080. It's now being served by Tomcat.
React app on port 8080
Wait. What?! That was still two steps to build it into a fat jar! What gives?
One Command to Build Them All
To build it all in one command, we'll enlist the help of the frontend-maven-plugin and make a minor change to the maven-resources-plugin. Before we do that, let's review the goals bound to the package phase by running mvn help:describe -Dcmd=package.
D:\workspace\react-fe-spring-boot>mvn help:describe -Dcmd=package
[INFO] 'package' is a phase corresponding to this plugin:
org.apache.maven.plugins:maven-jar-plugin:2.4:jar
It is a part of the lifecycle for the POM packaging 'jar'. This lifecycle includes the following phases:
* validate: Not defined
* initialize: Not defined
* generate-sources: Not defined
* process-sources: Not defined
* generate-resources: Not defined
* process-resources: org.apache.maven.plugins:maven-resources-plugin:2.6:resources
* compile: org.apache.maven.plugins:maven-compiler-plugin:3.1:compile
* process-classes: Not defined
* generate-test-sources: Not defined
* process-test-sources: Not defined
* generate-test-resources: Not defined
* process-test-resources: org.apache.maven.plugins:maven-resources-plugin:2.6:testResources
* test-compile: org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile
* process-test-classes: Not defined
* test: org.apache.maven.plugins:maven-surefire-plugin:2.12.4:test
* prepare-package: Not defined
* package: org.apache.maven.plugins:maven-jar-plugin:2.4:jar
* pre-integration-test: Not defined
* integration-test: Not defined
* post-integration-test: Not defined
* verify: Not defined
* install: org.apache.maven.plugins:maven-install-plugin:2.4:install
* deploy: org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.947 s
[INFO] Finished at: 2023-08-06T23:04:08+01:00
[INFO] ------------------------------------------------------------------------
The first change is to make the maven-resources-plugin execute in the generate-sources phase instead of validate, like so.
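Then add the frontend-maven-plugin with its goals bound to the early phases. A sketch, with the plugin version, Node/Yarn versions, and execution ids taken from the build log below, and the working directory assumed to be dev/my-app:

```xml
<plugin>
  <groupId>com.github.eirslett</groupId>
  <artifactId>frontend-maven-plugin</artifactId>
  <version>1.11.3</version>
  <configuration>
    <workingDirectory>dev/my-app</workingDirectory>
  </configuration>
  <executions>
    <!-- validate phase: install Node and Yarn locally -->
    <execution>
      <id>install-frontend-tools</id>
      <phase>validate</phase>
      <goals>
        <goal>install-node-and-yarn</goal>
      </goals>
      <configuration>
        <nodeVersion>v18.10.0</nodeVersion>
        <yarnVersion>v1.22.17</yarnVersion>
      </configuration>
    </execution>
    <!-- initialize phase: build the React app -->
    <execution>
      <id>build-frontend</id>
      <phase>initialize</phase>
      <goals>
        <goal>yarn</goal>
      </goals>
      <configuration>
        <arguments>build</arguments>
      </configuration>
    </execution>
  </executions>
</plugin>
```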
Now, do you know why I had to show you the Maven package phases? During the package lifecycle, the first thing done by the frontend plugin will be to install Node and Yarn (i.e. the validate phase). After that comes building the React app with yarn build (i.e. the initialize phase). And lastly, the resources plugin will copy the files over (i.e. the generate-sources phase). Voilà! We're ready to run the build again. Remove the built files (e.g. run mvn clean or delete the target directory, and delete the React app's build directory) and then do a mvn clean package. You should see something like below. After the build you should be able to run the jar like before.
D:\workspace\react-fe-spring-boot>mvn clean package
[INFO] Scanning for projects...
[INFO]
[INFO] -------------< com.blogspot.jpllosa:react-fe-spring-boot >--------------
[INFO] Building react-fe-spring-boot 0.0.1-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:3.2.0:clean (default-clean) @ react-fe-spring-boot ---
[INFO]
[INFO] --- frontend-maven-plugin:1.11.3:install-node-and-yarn (install-frontend-tools) @ react-fe-spring-boot ---
[INFO] Installing node version v18.10.0
[INFO] Copying node binary from C:\Users\jpllosa\.m2\repository\com\github\eirslett\node\18.10.0\node-18.10.0-win-x64.exe to D:\workspace\react-fe-spring-boot\dev\my-app\node\node.exe
[INFO] Installed node locally.
[INFO] Installing Yarn version v1.22.17
[INFO] Unpacking C:\Users\jpllosa\.m2\repository\com\github\eirslett\yarn\1.22.17\yarn-1.22.17.tar.gz into D:\workspace\react-fe-spring-boot\dev\my-app\node\yarn
[INFO] Installed Yarn locally.
[INFO]
[INFO] --- frontend-maven-plugin:1.11.3:yarn (build-frontend) @ react-fe-spring-boot ---
[INFO] Running 'yarn build' in D:\workspace\react-fe-spring-boot\dev\my-app
[INFO] yarn run v1.22.17
[INFO] $ react-scripts build
[INFO] Creating an optimized production build...
[INFO] Compiled successfully.
[INFO]
[INFO] File sizes after gzip:
[INFO]
[INFO] 46.61 kB build\static\js\main.46f5c8f5.js
[INFO] 1.78 kB build\static\js\787.28cb0dcd.chunk.js
[INFO] 541 B build\static\css\main.073c9b0a.css
[INFO]
[INFO] The project was built assuming it is hosted at /.
[INFO] You can control this with the homepage field in your package.json.
[INFO]
[INFO] The build folder is ready to be deployed.
[INFO] You may serve it with a static server:
[INFO]
[INFO] npm install -g serve
[INFO] serve -s build
[INFO]
[INFO] Find out more about deployment here:
[INFO]
[INFO] https://cra.link/deployment
[INFO]
[INFO] Done in 8.49s.
[INFO]
[INFO] --- maven-resources-plugin:3.2.0:copy-resources (copy-resources) @ react-fe-spring-boot ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Using 'UTF-8' encoding to copy filtered properties files.
[INFO] Copying 15 resources
[INFO]
[INFO] --- maven-resources-plugin:3.2.0:resources (default-resources) @ react-fe-spring-boot ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Using 'UTF-8' encoding to copy filtered properties files.
[INFO] Copying 1 resource
[INFO] Copying 0 resource
There you have it. Serving a React app through Spring Boot's fat jar and embedded Tomcat. Plus building it in one command! Grab the full repo here: github.com/jpllosa/react-fe-spring-boot
Here's an example of how to handle static web resources in Spring. Let us pretend that one of the business requirements of our Angular FE Spring Boot app is A/B testing. A/B testing is a method of comparing two versions of an app or webpage against each other to determine which one performs better. In short, we'll need a control webpage and a variant webpage. For the control webpage, there will be no changes. For the variant webpage, we'll need to inject "ab-testing.js" into the head of our webpage.
We will not be talking about how to do A/B testing. Instead, we will focus on how to inject the script tag in the head of our webpage. Requirements clear enough? Let's begin. Below is our control webpage.
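The essential feature of the control page is a placeholder comment in the head that the transformer can look for. A minimal sketch, assuming a standard Angular index.html (title and body element are placeholders):

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Angular FE Spring Boot</title>
  <!-- INJECT_HERE -->
</head>
<body>
  <app-root></app-root>
</body>
</html>
```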
As far as I know, there are two ways to fulfill this business requirement.
The first is to insert the script tag when ngOnInit is called in a Component class.
The second is to use Spring's ResourceTransformer. We will use this method because the script tag is injected server side: prior to serving the HTML file, it is transformed. The first strategy happens on the client side, which is why I think this is the better solution. This also seems better because "ab-testing.js" is already in the head prior to DOM manipulation by Angular. So let's get to it.
POM
Update the POM file. We need the dependency below for IOUtils.
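IOUtils comes from Apache Commons IO, so the dependency is presumably something like the following (the version shown is an assumption; pick a current one):

```xml
<dependency>
  <groupId>commons-io</groupId>
  <artifactId>commons-io</artifactId>
  <version>2.11.0</version>
</dependency>
```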
Create a com.blogspot.jpllosa.transformer package and add the following files below:
package com.blogspot.jpllosa.transformer;

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;
import org.springframework.web.servlet.config.annotation.ResourceHandlerRegistry;

@Configuration
public class IndexTransformationConfigurer extends WebMvcConfigurerAdapter {
    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("index.html")
                .addResourceLocations("classpath:/static/")
                .resourceChain(false)
                .addTransformer(new IndexTransformer());
    }
}
We create the class above so that we can add a handler when serving the index.html file. Here it specifies that the index.html file from the classpath:/static/ location will have to pass through a Transformer. Other static resources will not be affected. We are not caching the result of the resource resolution (i.e. resourceChain(false)).
package com.blogspot.jpllosa.transformer;

import org.apache.commons.io.IOUtils;
import org.springframework.core.io.Resource;
import org.springframework.stereotype.Service;
import org.springframework.web.servlet.resource.ResourceTransformer;
import org.springframework.web.servlet.resource.ResourceTransformerChain;
import org.springframework.web.servlet.resource.TransformedResource;

import javax.servlet.http.HttpServletRequest;
import java.io.IOException;

@Service
public class IndexTransformer implements ResourceTransformer {
    @Override
    public Resource transform(HttpServletRequest request, Resource resource, ResourceTransformerChain chain) throws IOException {
        String html = IOUtils.toString(resource.getInputStream(), "UTF-8");
        html = html.replace("<!-- INJECT_HERE -->",
                "<script src=\"//third.party.com/ab-testing.js\"> </script>");
        return new TransformedResource(resource, html.getBytes());
    }
}
We've kept our transformer simple for this example. The code is readable enough that you can tell what it is doing, isn't it? We are replacing the placeholder text with the script tag. Job done.
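To make the replace step concrete, here is a tiny standalone sketch showing the same placeholder substitution the transformer performs. InjectDemo is a hypothetical demo class, not part of the project:

```java
// InjectDemo.java - hypothetical standalone demo of the transformer's replace step
public class InjectDemo {
    static final String SCRIPT_TAG =
            "<script src=\"//third.party.com/ab-testing.js\"> </script>";

    // Replace the placeholder comment with the A/B testing script tag;
    // HTML without the placeholder is returned unchanged
    static String inject(String html) {
        return html.replace("<!-- INJECT_HERE -->", SCRIPT_TAG);
    }

    public static void main(String[] args) {
        String page = "<head><!-- INJECT_HERE --></head>";
        System.out.println(inject(page));
        // prints: <head><script src="//third.party.com/ab-testing.js"> </script></head>
    }
}
```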
Demonstration
If you have read the Angular FE Spring Boot Example, then you should know what to do next. In case you have not, I'll reiterate.
Build the Angular project with ng build in dev/my-app. This will create files in the dist directory.
Run mvn clean package in the project root directory. After that, run java -jar ./target/angular-fe-spring-boot-0.0.1-SNAPSHOT.jar and we should be able to see something like below running on localhost:8080. Is "ab-testing.js" there?