~~~ llm.txt Author: Alfred R. Duarte Domain: https://alfred.ad ~~~ ![_Hueshift Palette Board, **2023**_](/public/photos/bloomhue/hueshift-board.png "Hueshift Palette Board, Alfred R. Duarte 2023") > Design & Product # Hueshift > **March** - **April 2023** 1. [**Purpose**](#purpose) 2. [**Interface Design**](#interface-design) 3. [**User Experience & Testing**](#user-experience-testing) 4. [**Try Hueshift.io 🎨 🖼️**](#try-hueshift.io) ***Hueshift is a tool that helps you generate harmonious color palettes using a recursive machine learning process.*** ![_254 Spectrum Made with Hueshift, **2025**_](/public/photos/bloomhue/254-hueshift-spectrum.png "254 Spectrum, Hueshift, Alfred R. Duarte 2025") Often in design, you'll need to take a [color palette](https://en.wikipedia.org/wiki/Color_scheme "Color scheme – Wikipedia") and define a [set of light to dark steps](https://en.wikipedia.org/wiki/Color_scheme#Quantitative_schemes "Quantitative schemes – Wikipedia") for each color. ![_Warm Palette Extended with Hueshift, **2025**_](/public/photos/bloomhue/palette-to-hueshift-perspective.png "Extended Warm Palette Spectrum, Hueshift, Alfred R. Duarte 2025") In **interface design**, these values can be used to create _depth_ and _dimension_. One color with a **lighter** and **darker** step can be overlaid to produce elements with a monochrome, "_single color_" look. ![_Confirmed Label with 131 Hueshift Palette, **2025**_](/public/photos/bloomhue/hueshift-example-confirmed-label.png "Confirmed Label with 131 Hueshift Palette, Alfred R. Duarte 2025") It helps you **layer related information** in a predictable pattern that follows how colors **increase & decrease** in **lightness & darkness**. Just one color can cover a range of situations, with a clear separation of information. ![_Grid of Cards with 29 Hueshift Palette, **2025**_](/public/photos/bloomhue/29-hueshift-palette-overlay.png "Grid of Cards with 29 Hueshift Palette, Alfred R. Duarte 2025") This predictability can be especially useful when creating **light & dark themes**. It's easier to balance your main _accent_ shade against your _background_ & _foreground_ colors when you have steps that feel naturally spaced. ![_Light & Dark Themes with 222 Hueshift Palette, **2025**_](/public/photos/bloomhue/hueshift-example-light-dark-theme.png "Light & Dark Themes with 222 Hueshift Palette, Alfred R. Duarte 2025") It's also useful when creating subtle shading with a natural feel, while maintaining vibrant readability. ![_Graph – Analog Designs UI Styles Ⅱ, **2023**_](/public/photos/analog-designs/analog-designs-uistyles1-graph.png "Graph – Analog Designs UI Styles Ⅱ, Alfred R. Duarte 2023") I created **Hueshift** as a tool to automate this process. It uses a [two-stage machine learning process](/portfolio/engineering/under-construction/ "UNDER CONSTRUCTION | Alfred R. Duarte | Portfolio") to help you: - **Choose interesting colors that complement each other.** - **Automatically generate a set of light to dark shades for each color.** **_There are no high-fidelity mockups for this project. Hueshift was designed & built directly in React._** #### Tools used: - [VS Code](https://code.visualstudio.com/) - [React](https://react.dev/) - [Tailwind CSS](https://tailwindcss.com/) - [Affinity Designer](https://affinity.serif.com/en-us/designer/) (Figure Diagrams & Example References) - [Hueshift](https://hueshift.io/) (Color Palettes for Figure Diagrams)
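To make "light to dark steps" concrete, here is a deliberately naive sketch that ramps HSL lightness across Tailwind-style step names. This is only an illustration of the output shape; it is **not** Hueshift's machine learning method, and the linear step curve is my own assumption.

```ts
// Naive illustration of "light to dark steps" for one hue.
// NOT Hueshift's ML method; just a linear HSL lightness ramp.
const STEPS = [50, 100, 200, 300, 400, 500, 600, 700, 800, 900, 950] as const;

function naiveShades(hue: number, saturation = 70): Record<number, string> {
  const shades: Record<number, string> = {};
  for (const step of STEPS) {
    // 50 → very light (~93% lightness), 950 → very dark (15%).
    const lightness = 97 - (step / 950) * 82;
    shades[step] = `hsl(${hue} ${saturation}% ${Math.round(lightness)}%)`;
  }
  return shades;
}

console.log(naiveShades(180)[500]); // "hsl(180 70% 54%)"
```

A linear ramp like this quickly shows its limits: natural-feeling palettes bend lightness and saturation differently at each step, which is the part Hueshift's models handle.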
## Purpose What I found lacking in similar tools were the methods of generating **harmonious colors**. Most tools are limited to either [mathematical methods](https://en.wikipedia.org/wiki/Color_scheme#Harmonious_schemes "Harmonious schemes – Color scheme – Wikipedia") or _manual selection_. I wanted to create a tool that bridged the gap between mathematical precision and the organic artistry of human selection. **Hueshift** recursively generates harmonious colors starting from the **base hue**. This method allows for a wider range of generated palettes, with colors that still feel naturally harmonious. ## Interface Design **Hueshift** takes on an interface that is purposefully _form_ over _function_. In a world where UI is reduced to a single chat box, I wanted to take a different approach. I wanted to imagine an interface that is as **engaging & artistically inspiring** as it is **functional to use**. ### Inspiration Inspiration for the **palette board** came from the walls of swatches inside real-world paint stores. ![_Wall of Swatches, **GPT-4o 2025**_](/public/photos/misc/swatch-board.png "an image of a wall of swatches at a paint store, iphone shot") The **palette board** attempts to capture the dynamic nature of a real-world swatch wall in a paint store. ![_Hueshift Compact Palette Board Close-up, **2023**_](/public/photos/bloomhue/hueshift-board-compact-closeup.png "Hueshift Compact Palette Board Close-up, Alfred R. Duarte 2023") ### Features **Hueshift** is highly interactive. You can click to copy any **shade** as a **hex code**. You can **drag & drop** rows of swatches to reorder them or toss them out. ![_Swatch Drag & Drop on Hueshift Palette Board, **2023**_](/public/photos/bloomhue/hueshift-board-drag-drop-swatch.png "Swatch Drag & Drop on Hueshift Palette Board, Alfred R. Duarte 2023") Using the **sliders**, you can adjust the **hue** of the **base color**. You can also adjust palette-wide **lightness & saturation**. You can generate up to **`8` harmonious color swatches** in a single palette. ![_Hueshift Sliders, **2023**_](/public/photos/bloomhue/hueshift-sliders.png "Hueshift Sliders, Alfred R. Duarte 2023") Enable **Greytone** to generate a **greyscale** palette. ![_Hueshift Greytone Palette, **2025**_](/public/photos/bloomhue/hueshift-greytone-palette.png "Hueshift Greytone Palette, Alfred R. Duarte 2025") You can copy rows of swatches as **JSON**, or copy the entire palette as **JSON**. Click the **`Palettes ⬇️`** button to download a `palette.json` file of all your starred palettes.

```json
{
  "50": "#f2fdfc",
  "100": "#cbfbf6",
  "200": "#98f6ef",
  "300": "#5deae6",
  "400": "#2bd4d2",
  "500": "#14b6b8",
  "600": "#0d9196",
  "700": "#0f7175",
  "800": "#115a5f",
  "900": "#134b4e",
  "950": "#042a2f",
  "hue": 180
}
```
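That shape drops straight into a Tailwind setup. A minimal sketch, assuming a Tailwind CSS v3 project with a TypeScript config; the palette name `teal` is my own placeholder:

```ts
// tailwind.config.ts — wiring a downloaded Hueshift palette into Tailwind.
// Illustrative only: the palette name `teal` is a placeholder, and the
// shade values are copied from the palette.json example above.
import type { Config } from "tailwindcss";

// `hue` is palette metadata, not a color; keep only the shade steps.
const teal = {
  50: "#f2fdfc",
  500: "#14b6b8",
  950: "#042a2f",
  // ...remaining steps from palette.json
};

export default {
  content: ["./src/**/*.{ts,tsx}"],
  theme: {
    extend: {
      colors: { teal }, // enables classes like `bg-teal-500` / `text-teal-50`
    },
  },
} satisfies Config;
```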
With **Hueshift Pro**, you can edit the colors used to influence the **model** that generates _harmonious colors_. ![_Hueshift Model Selector, **2023**_](/public/photos/bloomhue/hueshift-models.png "Hueshift Model Selector, Alfred R. Duarte 2023") If you feel like the generated shades are "missing something", you can add or remove colors to influence the **model** to generate entirely new shades and palettes. **Hueshift** ships with **_`98`_** pre-defined **color models**, `21` of which are enabled by default. It also ships with **_`19`_** **greytone models**, `5` of which are enabled by default. ## User Experience & Testing I conducted light **user testing** with some designer & marketing friends to see how they used **Hueshift** and to hear their thoughts on the process the tool provides. ![_Hueshift Palette Board Close-up, **2023**_](/public/photos/bloomhue/hueshift-board-closeup.png "Hueshift Palette Board Close-up, Alfred R. Duarte 2023") ### Testing Process Feedback was collected through various forms, mainly **text messaging** and **Discord**. With a small test group, I wanted to meet people where they were. Rather than ask them to fill out a form, I sent around a survey as a text that people could easily reply to, with an open space for suggestions & comments. I sent them a succinct set of targeted questions to understand their usage patterns: 1. **Did you produce any palettes that you used, or would use, in a project?** 2. **Was there any point you said, "_I hope/wonder if Hueshift could do this?_" What were you trying to do?** 3. **What did you find cumbersome about the process or interface?** 4. **What reasons hold you back from using Hueshift in your workflow?** ### Feedback Summary > _"...really fun to play with!"_ Through this research, the feature for **dragging a row of swatches out of the palette to remove it** surfaced. Many people didn't understand how to get rid of swatches they didn't want, and just wanted to "toss them out". I also received feedback from basically everyone asking for one feature: > _"Can I generate a palette from an image?"_ While this was out of the project scope, it's a great suggestion for a future release. ### Conclusions In all, most found the process intuitive and easy to navigate. However, none of the designers said they would actually use the tool. The honest feedback I received was that picking colors manually in-app was just quicker and easier than using a totally separate tool. With that said, my marketing friends were very excited about its potential, but it was still just a bit too hands-on for them. I would like to revisit this project and incorporate the feedback I received. I think making the process even more streamlined would help increase the tool's usability and get people to actually adopt it. ## Try Hueshift.io 🎨 🖼️ Embedded below ⬇️ or at [beta.hueshift.io](https://beta.hueshift.io "Hueshift.io") (_best viewed in a separate tab_). #### Tips: - Click the **`Help`** button to open the **Help Guide**. - Switch your system color scheme to see **light** and **dark** modes. - Click **`Upgrade to Pro`** to unlock the **Full Version** of the app (it's free). @[115%](https://beta.hueshift.io/) --- ### [📚 **Book a meeting to discuss your project. [➔](mailto:alfred.r.duarte@gmail.com)**]{.highlight} [**alfred.r.duarte@gmail.com**](mailto:alfred.r.duarte@gmail.com "Gmail – Alfred R. Duarte") ~~~ The above material is owned by the author. This file was generated with SASHA. ~~~ ~~~ llm.txt Author: Alfred R. Duarte Domain: https://alfred.ad ~~~ ![_iOS 6-style Emoji Grid, **2022**_](/public/photos/spaceboy3000/ios6-emoji-grid.png "Emoji Grid, Alfred R. Duarte 2022") > Case Study — Design # Color Emoji (iOS 6.0) > **June 2022** 1. [**Secrets of Emoji Design**](#secrets-of-emoji-design) 2. [**Anatomy of Emoji**](#anatomy-of-emoji) 3. [**Dissection & Outlines**](#dissection-outlines) 4. [**Comparisons & Close-ups**](#comparisons-close-ups) 5. [**Side-by-Side Comparison**](#side-by-side-comparison) ***I remade the original emoji set in 2022 as vector-based emoji.*** I've always loved **Apple**'s [icon design](https://developer.apple.com/design/human-interface-guidelines/icons "Apple Human Interface Guidelines: Icons"). Their depth and precision are truly inspiring. The set of original **iOS 6.0** emoji is timeless & iconic.
Their style set the standard for emoji design. I wanted to take a look and see how the original emoji were made. If you try to blow emoji up to a large size, you'll find they're just images. The [original designers have shared](https://blog.emojipedia.org/who-created-the-original-apple-emoji-set/ "Who Created The Original Apple Emoji Set? – Emojipedia") that they were digital paintings, but I wanted to see what I could do using vector alone. I had recently completed a similar [case study on pushing the limits of vector shapes & gradients for UI design](/portfolio/design/render-optimized-skeumorphic-ui-2022/ "Case Study: Render–Optimized Skeumorphic UI, 2022 | Alfred R. Duarte | Portfolio"). That project equipped me with very delicate _shading_ & _composition_ techniques using only vector. I wanted to see how those same techniques could apply to the depth & detail of emoji design. **_All of the emoji I created in this project are purely vector-based._** > _**Disclaimer**: I am **not affiliated** with Apple. Emoji referenced in comparisons are **unmodified** and used for educational purposes only. Close-ups & breakdowns are all my own work._ #### Assets produced: **`56`** **Expressions** - **`5`** **Heads** - **`33`** **Eyes** - **`18`** **Eyebrows** - **`29`** **Mouths** - **`3`** **Teeth** - **`3`** **Tongues** - **`2`** **Cheek Blushes** - **`1`** **Chin** - **`12`** **Accessories** **`21`** **Bonus** - **`4`** **People** - **`1`** **Heart** - **`16`** **Symbols** #### Tools used: - [Affinity Designer](https://affinity.serif.com/en-us/designer/) ## Secrets of Emoji Design ![_[Sun](https://emojipedia.org/apple/ios-6.0/sun "Sun on Apple iOS 6.0 – Emojipedia") & [Surprised](https://emojipedia.org/apple/ios-6.0/face-with-open-mouth "Surprised Face with Open Mouth on Apple iOS 6.0 – Emojipedia") Emoji Overlap, **Apple Color Emoji (iOS 6.0) 2012**_](/public/photos/spaceboy3000/emoji-overlap-sun-surprised.png "Sun & Surprised Emoji Overlap, Apple 2012") _Which came first_–the **_sun_** or the **_surprised face_**? I was browsing the original emoji set looking for strong reference candidates. I noticed the shading for the body of the **sun** emoji was nearly identical to the body of the **emoji face**/**head**. Rather than attempt an emotive emoji, I decided to use the **sun** emoji as my starting reference for breaking down the anatomy of the **emoji face**. While I can't unmask specifics around my techniques, I can share the principles I used and my process behind breaking down **Apple**'s emoji design. ### Light & Scene Apple designers carry a strong understanding of _physically-based_ **light** & **scene composition**. ![](/public/photos/spaceboy3000/emoji-scene-composition.png "Emoji Scene Composition Diagram, Alfred R. Duarte 2025") Emoji take on _studio-quality lighting_, likely from a **3-point lighting setup** (not pictured). A strong **light source** seems to be positioned vertically above the emoji, casting a strong _highlight_ on subjects. **Fill** & **back lighting** are used to create a sense of depth, reducing contrasting shadows while maintaining vibrancy. #### **_A Word on the Physical Properties of Light_** The **light spectrum** is a range of _wavelengths_. These are broken into `3` categories: - **Infrared** wavelengths, low energy, longer than red; - **Visible** wavelengths, medium energy, between red and blue; - **Ultraviolet** wavelengths, high energy, shorter than blue. Notice the colors: **red** & **blue**.
When you shift colors, imagine the wavelengths of light. You're shifting more 🔴 **red**, or more 🔵 **blue**. 🔴 **Red** is lower energy and reads _darker_. 🔵 **Blue** is higher energy and reads _brighter_. ### Object Composition As previously mentioned, emoji were originally digital paintings. Objects don't really follow [shape-building principles](https://helpx.adobe.com/illustrator/using/building-new-shapes-using-shape.html#shape-builder "Build new shapes with Shaper and Shape Builder tools – Adobe Illustrator Help") like we're used to seeing from **Apple**'s iconography. With that said, their emoji are still highly compositional, with mostly-geometric shapes used to build objects. ![_Sun Emoji Vector & Outlines Dissection, **2025**_](/public/photos/spaceboy3000/emoji-dissection-sun.png "Sun Emoji Dissection, Alfred R. Duarte 2025") **Apple** designers likely **painted onto flat shapes** to create depth and simulate light. They likely used a combination of **layer effects** as well as the trusty **brush tool** to create the final look. My own technique reuses shape layers with gradients to construct depth out of as few vertices as possible. This offloads processing from the **CPU**, relying on shaders to push intensive rendering work onto the **GPU**. ### Edges, Faces, & Bodies In typical **Apple** style, much focus is placed on **edge definition**. **Edges** capture _shape_ & _form_ to create their signature sense of depth. **Edges** & **faces** catch light to build _definition_. **Faces** mold form and express _physicality_. These are material-based properties–_specular_, _diffuse_, _ambient_–used to portray things like _roughness_, _glossiness_, etc. **Bodies** cast shadow and fill light-space for a sense of _volume_. ![_Sun Emoji Close-up, **2022**_](/public/photos/spaceboy3000/emoji-closeup-sun.png "Sun Emoji Close-up, Alfred R. Duarte 2022") ### Blending & Gradients The key to achieving the classic **Apple** emoji look is **blending**. Understanding how [physical light sources interact](#a-word-on-the-physical-properties-of-light) with materials and with one another is key. ![_Fire Emoji Close-up, **2022**_](/public/photos/spaceboy3000/emoji-closeup-fire.png "Fire Emoji Close-up, Alfred R. Duarte 2022") **Strong composition** lays the foundation for vibrant _blending_ work. The classic approach–a **base**, with **layers of shading** to create _depth_–is how you produce strong results. Take extra care with how you shape **edges** and **faces** against the scene lighting. ![_Fire Emoji Base Gradient & Shading Dissection, **2025**_](/public/photos/spaceboy3000/emoji-dissection-fire.png "Fire Emoji Base Gradient & Shading Dissection, Alfred R. Duarte 2025") ## Anatomy of Emoji Emoji rely heavily on strong principles of **light** & **blending**. **Lines** take _shape_ and mold _form_ into the face base. **Gradients** capture depth from the light cast on the face surface. Emoji can be broken into two main parts: **face** & **expression**. ### Emoji Face Anatomy The **face** is the foundation for all emotive emoji. ![](/public/photos/spaceboy3000/emoji-face-anatomy.png "Emoji Face Anatomy Diagram, Alfred R. Duarte 2025") The emoji **face** is split into `5` layers: 1. **Highlight** 2. **Flush** 3. **Border** 4. **Base** 5. **Shadow** This anatomy is identical for all emotive emoji. This includes the [😡 **Enraged Face**](https://emojipedia.org/apple/ios-6.0/pouting-face "Enraged Face on Apple iOS 6.0 – Emojipedia") emoji, which takes on a full-face **flush**.
The **base** & **border** are only recolored for the two [😈 **Smiling Face with Horns**](https://emojipedia.org/apple/ios-6.0/smiling-face-with-horns "Smiling Face with Horns on Apple iOS 6.0 – Emojipedia") & [👿 **Angry Face with Horns**](https://emojipedia.org/apple/ios-6.0/angry-face-with-horns "Angry Face with Horns on Apple iOS 6.0 – Emojipedia") devil emoji. For the original set, the only time the **face** changes form is for the [😱 **Face Screaming in Fear**](https://emojipedia.org/apple/ios-6.0/face-screaming-in-fear "Face Screaming in Fear on Apple iOS 6.0 – Emojipedia") emoji. It takes the shape of the [👽 **Alien**](https://emojipedia.org/apple/ios-6.0/alien "Alien on Apple iOS 6.0 – Emojipedia") emoji; however, its anatomy remains the same. ### Emoji Expression Anatomy **Expressions** are built on top of the **face** anatomy. ![](/public/photos/spaceboy3000/emoji-anatomy.png "Emoji Expression Anatomy Diagram, Alfred R. Duarte 2025") An emoji **expression** consists of `5` layers: 1. **Accessories** 2. **Eyebrows** 3. **Eyes** 4. **Mouth** 5. **Face** **Expressions** reuse a shared set of _anatomy_, combining parts from other emoji to create unique **expressions**. Many _eyebrows_, _eyes_, & _mouths_ are shared across multiple emoji. ![_Selection of Emoji with Reused Anatomy, **2022**_](/public/photos/spaceboy3000/emoji-closeup-reused-anatomy.png "Selection of Emoji with Reused Anatomy, Alfred R. Duarte 2022") ## Dissection & Outlines My technique for creating these emoji is purely **vector-based**. Everything is built out of **shape layers** with **gradients**. There are **_no_** _layer effects_ or _masks_ used in this project. ![_Smiling Face Emoji Vector+Outlines & Outlines Dissection, **2025**_](/public/photos/spaceboy3000/emoji-dissection-smiling.png "Smiling Face Emoji Dissection, Alfred R. Duarte 2025") I didn't use any brush tricks or vectorization tools. Everything was either built from **geometric shapes** or by using the **pen tool**. ![_Hundred Points Emoji Outlines, **2025**_](/public/photos/spaceboy3000/emoji-outlines-hundred.png "Hundred Points Emoji Outlines, Alfred R. Duarte 2025") There may be inconsistencies in the vertex placements on the outlines above due to the technique used to render the vertices: I split each line into individual paths, then used `open square` **line-ends** to create the vertices. ## Comparisons & Close-ups Below are some close-up comparisons of my emoji with **Apple**'s iOS 6.0 emoji. A small reminder that my emoji are **vector-based** and **Apple**'s are **pixel-based**. ![](/public/photos/spaceboy3000/emoji-comparison-alien.png "Alien Emoji Comparison; Apple 2012, Alfred R. Duarte 2022") ![](/public/photos/spaceboy3000/emoji-comparison-crying.png "Crying Emoji Comparison; Apple 2012, Alfred R. Duarte 2022") ![](/public/photos/spaceboy3000/emoji-comparison-angry.png "Enraged Emoji Comparison; Apple 2012, Alfred R. Duarte 2022") ![](/public/photos/spaceboy3000/emoji-comparison-finger.png "Index Pointing Up Emoji Comparison; Apple 2012, Alfred R. Duarte 2022") ![](/public/photos/spaceboy3000/emoji-comparison-party.png "Party Popper Emoji Comparison; Apple 2012, Alfred R. Duarte 2022") ![_Ghost Emoji Close-up, **2022**_](/public/photos/spaceboy3000/emoji-closeup-ghost.png "Ghost Emoji Close-up, Alfred R. Duarte 2022") ![_Bomb Emoji Close-up, **2022**_](/public/photos/spaceboy3000/emoji-closeup-bomb.png "Bomb Emoji Close-up, Alfred R. Duarte 2022")
Duarte 2022") ![_Smiling Face with Sunglasses Emoji Close-up, **2022**_](/public/photos/spaceboy3000/emoji-closeup-sunglasses.png "Smiling Face with Sunglasses Emoji Close-up, Alfred R. Duarte 2022") ## Side-by-Side Comparison %[iOS 6.0 Emoji Grid, Apple 2012](/public/photos/spaceboy3000/ios6-emoji-grid-apple.png) %[iOS 6.0 Emoji Grid, Alfred 2022](/public/photos/spaceboy3000/ios6-emoji-grid-alfred.png) > Use the slider above to compare my emoji (_left_) against the original **Apple iOS 6.0** emoji (_right_). For only using _vector shapes_ & _gradients_–I think I got pretty close! Looking back at them now, I see a few small things besides shading differences: - The teeth should take on a shadow from the top lip; - The devils' eyebrows & eyes should be touching; - And the flushed-blue faces should have dark-blue eyebrows. _Soo_ close! 😄 The sizes of the original **Apple** 😩 & 😫 emoji are slightly larger than all others for some reason–likely to accommodate their larger mouths. I left mine intentionally all an identical size. In all, this was a fun project! My goal wasn't to perfectly recreate the original emoji, just to see how far pure-vector could go. In the end, I learned a lot from **Apple**'s emoji design and got to see how far I could push my vector & gradient skills. 🖼️ ## Resources - [**Apple Color Emoji (iOS 6.0)**, _Emojipedia_](https://emojipedia.org/apple/ios-6.0 "Apple Color Emoji (iOS 6.0) – Emojipedia") ##### Apple Color Emoji (iOS 6.0) Referenced - [☀️ **Sun**, _Emojipedia_](https://emojipedia.org/apple/ios-6.0/sun "Sun on Apple iOS 6.0 – Emojipedia") - [😮 **Face with Open Mouth**, _Emojipedia_](https://emojipedia.org/apple/ios-6.0/face-with-open-mouth "Face with Open Mouth on Apple iOS 6.0 – Emojipedia") - [👽 **Alien**, _Emojipedia_](https://emojipedia.org/apple/ios-6.0/alien "Alien on Apple iOS 6.0 – Emojipedia") - [😭 **Loudly Crying Face**, _Emojipedia_](https://emojipedia.org/apple/ios-6.0/loudly-crying-face "Loudly Crying Face on Apple iOS 6.0 – Emojipedia") - [😡 **Enraged Face**, _Emojipedia_](https://emojipedia.org/apple/ios-6.0/pouting-face "Pouting Face on Apple iOS 6.0 – Emojipedia") - [☝️ **Index Pointing Up**, _Emojipedia_](https://emojipedia.org/apple/ios-6.0/index-pointing-up "Index Pointing Up on Apple iOS 6.0 – Emojipedia") - [🎉 **Party Popper**, _Emojipedia_](https://emojipedia.org/apple/ios-6.0/party-popper "Party Popper on Apple iOS 6.0 – Emojipedia") --- ### [📚 **Book a meeting to view the .afdesign files. [➔](mailto:alfred.r.duarte@gmail.com)**]{.highlight} [**alfred.r.duarte@gmail.com**](mailto:alfred.r.duarte@gmail.com "Gmail – Alfred R. Duarte") ~~~ The above material is owned by the author. This file was generated with SASHA. ~~~ ~~~ llm.txt Author: Alfred R. Duarte Domain: https://alfred.ad ~~~ ![_Sample UIs 1-7 on Desktop – Analog Designs UI Styles Ⅰ, **2022 & 2025**_](/public/photos/analog-designs/analog-designs-ui-styles-i-ui-1-7-desktop.png "Sample UIs 1-7 on Desktop – Analog Designs UI Styles Ⅰ, Alfred R. Duarte 2022 & 2025") > Case Study — Design # Render–Optimized Skeumorphic UI > **October** - **December 2022** 1. **[Purpose](#purpose)** 2. **[Process](#process)** 3. **[Fantasy Plugins](#fantasy-plugins)** 4. **[Symbols](#symbols)** ![_UI 1 – Analog Designs UI Styles Ⅰ, **2022**_](/public/photos/analog-designs/analog-designs-ui-styles-i-ui1-preview.png "UI 1 – Analog Designs UI Styles Ⅰ, Alfred R. 
Duarte 2022") _**All of the interface assets I created in this project are purely vector-based**, excluding some image-based textures._ #### 🎚️ Assets produced: **`282`** **Assets** - **`30`** **Panels** - **`38`** **Knobs** - **`14`** **Sliders** - **`26`** **Buttons** - **`58`** **Accessories** - **`11`** **Textures** - **`5`** **Fonts** **`225`** **Symbols** #### Tools used: - [Affinity Designer](https://affinity.serif.com/en-us/designer/) ## Purpose A **music plugin** is an interface users interact with to control parameters of an **audio processing pipeline**. Important to note, the interface is just a skin. The _audio processor_ is the main consumer of resources. This means interfaces need to be lightweight and mindful of performance overhead. _Audio processing_ happens in realtime on the **CPU**. Plugin UIs need to offload as much rendering as possible to the **GPU**. In practice, this means reducing the amount of **draw calls** & **vertex buffers** (typically w/ _subranging_ & _instancing_) processed by the **CPU**. For example, reallocating one (large) **VBO** vs. updating separate **VBOs** for each plugin window instance can free up **CPU**. Rather than your **CPU** processing each instance (`O(n)`), it can process a single **VBO** for all instances and upload it to the **GPU** (`O(1)`). Optimizations like this are crucial when you rely on individual-frame performance optimizations. Each frame of processing (both _audio_ & _UI_) needs to happen in as little time as possible. Otherwise, you risk glitching & artifacts in your audio output. That's what the **buffer size** on your **audio interface** is for–adjusting the amount of audio samples processed per frame. A larger buffer size means more samples are processed, but at the expense of latency (the audio playing back later than it appears on the screen). A **music plugin host** dispatches each chunk of samples at a consistent interval (the **buffer size**). If a frame stalls or exceeds its buffer period, the following frame will not start in time and underrun the buffer. ``` ╭< Frame 1 > ╭< Frame 2 > ╭< Frame 3 > ├ ✓ 10ms ├ ⨯ 20ms ├ ✓ 10ms │ Pass │ Underrun → │ Artifacts ``` ### Rendering Options For rendering music tooling interfaces in software, you have a few options: 1. **Image textures (e.g. PNG)** 2. **Vector-based renderer** 3. **3D-based renderer**, (_can include shading pipelines_) **Image textures** are the most common, being the easiest and least resource intensive. You get detailed designs without the overhead. They have their obvious limitations. Fixed sizing is a major issue for distribution across the wide array of desktop screen sizes & resolutions. (_Looking at you, ultrawide screen users._) To scale the window for an **image texture**-based UI, you need to include different sizes of all your images for each target window scale. Otherwise, elements of your UI will look pixelated/blurry when scaled up. Another slowdown is the workflow for rendering animation sequences. You have to render _each frame_ as an image, then assemble it as an **animation strip**. There are [many](https://navelpluisje.github.io/figma-knob-creator "Figma Knob Creator | A Figma plugin for creating knob stacks") [tools](https://tripletechaudio.com/products/knob-maker-for-vst/ "Knob Maker - TripleTech") that can help automate this process. 
I need to mention the web version of the classic [KnobMan by g200kg](https://www.g200kg.com/en/webknobman/index.html?f=3p_wedge.knob&n=2 "WebKnobMan Knob Designer | g200kg Music & Software"), which has been used to render knobs & sliders for countless music plugin UIs throughout the decades. ![_Knob 22 Animation Sequence – Analog Designs UI Styles Ⅰ, **2022 & 2025**_](/public/photos/analog-designs/analog-designs-ui-styles-i-knob22-sequence.png "Knob 22 Animation Sequence – Analog Designs UI Styles Ⅰ, Alfred R. Duarte 2022 & 2025") **Vector-based renderers** solve the sizing issue, but can incur significant performance overhead with the number of moving/animated parts in a music plugin UI. Lots of vector-based plugins simplify graphical complexity to make up for the increase in overhead, and craft unique stylized looks. ![_[FabFilter Pro-Q 4](https://www.fabfilter.com/products/pro-q-4-equalizer-plug-in "FabFilter Pro-Q 4 - Equalizer Plug-In") Plugin UI Screenshot, **FabFilter 2025**_](/public/photos/misc/fabfilter-pro-q-4.jpg "FabFilter Pro-Q 4 Plugin UI Screenshot, FabFilter 2025") The cost of **3D-based renderers**–both to develop & to create assets for–plus the overhead of rendering in realtime, means most music plugins don't use them. Instead, plugins with 3D-based elements just render static images and animate the image sequences. Some plugins blend rendered elements with vector-based interactive elements. This way, continuous animation is smooth without most of the overhead cost. ![_[Arturia Jup-8 V](https://www.arturia.com/products/software-instruments/jup-8-v/overview "Arturia - Jup-8 V") Plugin UI Screenshot, **Arturia 2025**_](/public/photos/misc/arturia-jup-8-v.png "Arturia Jup-8 V Plugin UI Screenshot, Arturia 2025") ### Project Goals The goal of this project was to develop a set of **vector-based techniques for rendering skeuomorphic music plugin UIs**. The idea is that the UI would be drawn: - **Once (`1`) when the plugin window is first opened/drawn** - **When the user resizes the plugin window** (_continuous_ or _throttled_/_debounced_) This ensures the UI can be truly responsive–and always drawn perfectly to scale–while reducing overhead. ![_Sample UI 1 Close-up – Analog Designs UI Styles Ⅰ, **2022 & 2025**_](/public/photos/analog-designs/analog-designs-ui-styles-i-ui1-closeup.png "UI 1 Close-up – Analog Designs UI Styles Ⅰ, Alfred R. Duarte 2022 & 2025") _Localized areas_ are then redrawn as needed for: - **Parameter changes** (e.g. user adjusts a knob) - **State changes** (e.g. user hovers over a button) - **Animation** (e.g. spectrum analyzer graph) Animation for controls like knobs & sliders can be **fully continuous** and still stay lightweight. This eliminates the need for prerendered animation assets, constructing animation strips, and prerendering multiple sizes of the same asset for each target window scale. **Flat-shading** is the final component in reducing **draw calls** & **vertex buffers** processed by the **CPU**. No _lighting_ (faster frag shaders), no _normals_ (even faster), no _specular_; _zero physically-based rendering_. Just flat colors & gradients. Even shadows are flat gradients. ## Process I collected a set of **`57` music interface references**, ranging from vintage hardware to modern software. I analyzed common controls, the styles used to convey functionality, and user experience patterns between interfaces. 'Lotta knobs.
![_Reference Interfaces, **Google Images 2022**_](/public/photos/analog-designs/analog-designs-ui-styles-i-references.png "Reference Interfaces, Google Images 2022") Each control follows a predictable layer architecture. For each control type–knobs, sliders, & buttons–I constructed a reusable architecture for building new controls. Below is an example of **`Knob 22`** and its structure. ![_Knob 22 Anatomy – Analog Designs UI Styles Ⅰ, **2022 & 2025**_](/public/photos/analog-designs/analog-designs-ui-styles-i-knob-anatomy.png "Knob Anatomy – Analog Designs UI Styles Ⅰ, Alfred R. Duarte 2022 & 2025") The two bottom layers, **Ring Notches** & **Base Border**, are static. The two top layers, **Top Face** & **Indicator**, can be animated depending on the design of the control. Certain designs may only need to animate the **Indicator** **CPU**-side, animating gradients **GPU**-side. A key aspect of these techniques is **minimizing the amount of vector data** processed by the **CPU**. The lower the number of animated vertices, the lower the **CPU** processing. The lower the vertex count altogether, the greater the number of window instances that can be drawn together. ![_Knob 22 Vector & Outlines Dissection – Analog Designs UI Styles Ⅰ, **2022 & 2025**_](/public/photos/analog-designs/analog-designs-ui-styles-i-knob22-dissection.png "Knob 22 Dissection – Analog Designs UI Styles Ⅰ, Alfred R. Duarte 2022 & 2025") Vertex counts can rise slightly, as long as the high-vertex element either remains static (only drawn on window context change) or can be offloaded & animated **GPU**-side. Remember our knob animation from earlier in the article. Rather than computing each frame of rotation animation continuously, a _lookup table_ (LUT) can be used to store the animation's values ahead of time (see the sketch at the end of this section). Controls with discrete/stepped values can take this a step further and reduce the number of intervals stored in the LUT. ![_Knob 22 Animation Sequence Breakdown – Analog Designs UI Styles Ⅰ, **2022 & 2025**_](/public/photos/analog-designs/analog-designs-ui-styles-i-knob22-sequence-breakdown.png "Knob 22 Animation Sequence Breakdown – Analog Designs UI Styles Ⅰ, Alfred R. Duarte 2022 & 2025") In fact, you can just precompute curve values into a LUT and reuse them for drawing each unchanged frame of animation. Design choices can close the loop and further reduce the processing involved. You don't need your **CPU** to calculate vertices of a perfect circle when your **GPU** can do it. A perfect circle doesn't need a LUT for rotation values. The combination of these techniques can reduce **CPU** load for skeuomorphic-style interfaces, while still allowing for high-quality, responsive, & scalable interfaces. ![_Sample UI 1 Vector & Outlines Dissection – Analog Designs UI Styles Ⅰ, **2022 & 2025**_](/public/photos/analog-designs/analog-designs-ui-styles-i-ui1-dissection.png "UI 1 Dissection – Analog Designs UI Styles Ⅰ, Alfred R. Duarte 2022 & 2025")
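Here is a sketch of that LUT idea under the assumptions above: a stepped knob whose indicator sweeps a fixed arc, with per-step rotation values computed once and reused on every redraw. The names and the `270°` sweep are my own placeholders, not values from this project's framework.

```ts
// Illustrative LUT sketch: precompute the indicator rotation for a stepped
// knob once, then reuse it on every redraw. The 270° sweep and all names
// are placeholders, not values from a specific framework.
interface KnobFrame {
  angleRad: number; // rotation applied to the Indicator layer
  cos: number;      // cached for cheap 2D rotation of the few animated vertices
  sin: number;
}

function buildKnobLut(steps: number, sweepDeg = 270): KnobFrame[] {
  const lut: KnobFrame[] = [];
  for (let i = 0; i < steps; i++) {
    // Map step i across the sweep, centered on the knob's "noon" position.
    const angleRad = ((i / (steps - 1)) * sweepDeg - sweepDeg / 2) * (Math.PI / 180);
    lut.push({ angleRad, cos: Math.cos(angleRad), sin: Math.sin(angleRad) });
  }
  return lut;
}

// A 12-position knob only ever needs these 12 precomputed frames.
const knobLut = buildKnobLut(12);
```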
Duarte 2022") ![_Sample UI 2 – Analog Designs UI Styles Ⅰ, **2022 & 2025**_](/public/photos/analog-designs/analog-designs-ui-styles-i-ui2.png "Sample UI 2 – Analog Designs UI Styles Ⅰ, Alfred R. Duarte 2022 & 2025") ![_Sample UI 3 – Analog Designs UI Styles Ⅰ, **2022 & 2025**_](/public/photos/analog-designs/analog-designs-ui-styles-i-ui3.png "Sample UI 3 – Analog Designs UI Styles Ⅰ, Alfred R. Duarte 2022 & 2025") ![_Sample UI 4 – Analog Designs UI Styles Ⅰ, **2022 & 2025**_](/public/photos/analog-designs/analog-designs-ui-styles-i-ui4.png "Sample UI 4 – Analog Designs UI Styles Ⅰ, Alfred R. Duarte 2022 & 2025") ![_Sample UI 5 – Analog Designs UI Styles Ⅰ, **2022 & 2025**_](/public/photos/analog-designs/analog-designs-ui-styles-i-ui5.png "Sample UI 5 – Analog Designs UI Styles Ⅰ, Alfred R. Duarte 2022 & 2025") ![_Sample UI 6 – Analog Designs UI Styles Ⅰ, **2022 & 2025**_](/public/photos/analog-designs/analog-designs-ui-styles-i-ui6.png "Sample UI 6 – Analog Designs UI Styles Ⅰ, Alfred R. Duarte 2022 & 2025") The legality surrounding the exclusion of woodgrain textures from a set of music plugin UIs is a bit of a gray area. I've included a sample with woodgrain below to ensure compliance. ![_Sample UI 7† – Analog Designs UI Styles Ⅰ, **2022 & 2025**_](/public/photos/analog-designs/analog-designs-ui-styles-i-ui7.png "Sample UI 7 – Analog Designs UI Styles Ⅰ, Alfred R. Duarte 2022 & 2025") † _Sample UI 7 contains an image texture for the woodgrain side panels._ ## Symbols Included with the set are `225` **line icon symbols**. I wanted to experiment with an idea I had for constructing the **grid** & **guide system**. ![_Flag Symbol with Guides – Analog Designs UI Styles Ⅰ, **2022**_](/public/photos/analog-designs/analog-designs-ui-styles-i-symbol-guides.png "Flag Symbol with Guides – Analog Designs UI Styles Ⅰ, Alfred R. Duarte 2022") I used concepts from [shape building](https://helpx.adobe.com/illustrator/using/building-new-shapes-using-shape.html#shape-builder "Build new shapes with Shaper and Shape Builder tools – Adobe Illustrator Help") to **anticipate curves & placements into a set of guides**. ![_Symbol Samples – Analog Designs UI Styles Ⅰ, **2022**_](/public/photos/analog-designs/analog-designs-ui-styles-i-symbols.png "Symbol Samples – Analog Designs UI Styles Ⅰ, Alfred R. Duarte 2022") My conclusions of the **guides** were mixed. The **guides** did help speed up creation by helping me quickly place curves. But I also found them a bit restrictive for some compositions. I'll likely stick with something more traditional for most projects. --- ### [🎛️ **Book a meeting to discuss your project. [➔](mailto:alfred.r.duarte@gmail.com)**]{.highlight} [**alfred.r.duarte@gmail.com**](mailto:alfred.r.duarte@gmail.com "Gmail – Alfred R. Duarte") . ~~~ The above material is owned by the author. This file was generated with SASHA. ~~~ ~~~ llm.txt Author: Alfred R. Duarte Domain: https://alfred.ad ~~~ ![_The Centaurs AI Summit, **Davos 2025**_](/public/photos/stromback/ai-summit-panel-2025.jpg "The Centaurs AI Summit, World Economic Forum 2025") > Design # The Centaurs AI Summit, World Economic Forum 2025 > **December 2024** 1. [**Motif**](#motif) 2. [**Slides**](#slides) 3. [**Branding**](#branding) 4. 
[**Graphics**](#graphics) [Olivier Oullier](https://www.weforum.org/people/olivier-oullier/ "Olivier Oullier | World Economic Forum"), PhD (founder/CEO of [Inclusive Brains](https://www.allianz-trade.com/en_global/news-insights/news/prometheus.html "Allianz Trade x Inclusive Brains")) is a neuroscientist, AI entrepreneur, and investor. He was hosting a multi-speaker summit through his organization, [The Centaurs](https://thecentaurs.ai/ "The Centaurs AI SUMMIT – Davos 2025 Edition"), on day 1 of the [World Economic Forum in Davos](https://www.weforum.org/ "The World Economic Forum"). Their new podcast, [The Centaurs AI Podcast](https://www.instagram.com/thecentaursai/ "The Centaurs AI Summit & Podcast Instagram"), launched during the summit. I provided the **pitch deck** & **event schedules**. These materials were sent to attendees prior to the event and were used during the summit. I was contracted to provide: - **Initial concepts** - **Branding** - **Decks/presentation slides** - **Scheduling posters/flyers** - **Lay the groundwork & systems for asset creation** - **Render high-quality graphics for display** (_shown on **CNBC** coverage_) > _This project can be seen as an extension of my work with [Stromback Venues](/portfolio/design/stromback-venues-davos-25-2024/ "Stromback Venues (Davos '25), 2024 | Alfred R. Duarte | Portfolio")._ For this project, the clients wanted a _hyper-modern_, _forward-thinking_ aesthetic. _Luxury = refinement_. **Simplicity** & **bold contrast** were heavily employed to offer stark and captivating visuals for viewers, with a sense of _futuristic mystique_. I continued my goal from past deck commissions: **tell visual stories while maximizing readability**. Completed materials were sent to a team in Portugal for further asset creation, ranging from videos & social content to a website. #### Project length: **`2 weeks`** #### Materials produced: - **`1`** **deck**; _Figma_ - **`1`** **deck**; _Google Slides_ - **`9`** **slides**; _Figma_ - **`9`** **slides**; _Google Slides_ - **`17`** **high-quality display graphics**; _Figma_ #### Tools used: - [Figma](https://www.figma.com/) - [Google Slides](https://workspace.google.com/products/slides/) - [Google Drive](https://workspace.google.com/products/drive/) (Team Collaboration) ## Motif We've all been there: you design a strong motif with simple-enough patterns, send off guidelines & mockups to your client's team, and they return with... well, something else entirely. This project was incredibly fast-paced, involved multiple stakeholders' input, was under a tight deadline–and the client's team was based in Portugal while I was based in Los Angeles. I wanted to create a motif that could _truly_ be **easily adapted to different contexts**, ensuring cohesion under these extreme constraints. ![_The Centaurs AI Summit Motif & Breakdown, **2024** & **2025**_](/public/photos/stromback/ai-summit-motif-2025.png "The Centaurs AI Summit Motif, Alfred R. Duarte, 2024 & 2025") An adventurous, bold look was chosen to capture the new frontiers provided by emerging AI technologies. Focus was placed on capturing a striking elegance, rather than an over-the-top presence. A **black base** with **periwinkle accents** & **deep-violet undertones** evoked a sense of _adventure_ & _mystery_. This motif was carried through the deck and all other assets produced for this project. ## Slides I produced the **main event deck** & **scheduling** given to attendees. - Custom **deck slides** were designed in _Figma_.
- Slide content was produced based on an outline document (bullet-points) provided by the clients. - Upon approval, the _Figma_ designs were then **fully recreated in Google Slides** for presentation & sharing. - **Scheduling was produced** based on a similar outline document provided by clients. - **Slides were prepared** in _Figma_ and sent to the team in Portugal for further asset creation. ## Branding ![_The Centaurs AI Logo Icon, **2024**_](/public/photos/stromback/centaur-ai.png "The Centaurs AI Logo Icon, Alfred R. Duarte 2024") - Branding elements, including a **logo icon** & **wordmark**, were produced for the event & podcast. - Drawing on the clients' existing assets and the concept of the Centaur–introduced by chess legend Garry Kasparov–revolving around the synergy of human and artificial intelligence. - **Full concept boards** were produced in _Figma_. - **Logo concepts** were mocked in _Figma_. - Iterations and **final renders** were produced in _Figma_. ## Graphics - **High-quality display graphics** were produced for the event. - Display backgrounds, name cards, and title cards were produced. - Style matched the event aesthetic I created for the deck & branding. - **Concepts & final renders** were produced in _Figma_. - Graphics were **handed off for additional asset creation** to a team in Portugal. --- ### [🧠 **Book a meeting to discuss your project. [➔](mailto:alfred.r.duarte@gmail.com)**]{.highlight} [**alfred.r.duarte@gmail.com**](mailto:alfred.r.duarte@gmail.com "Gmail – Alfred R. Duarte") ~~~ The above material is owned by the author. This file was generated with SASHA. ~~~ ~~~ llm.txt Author: Alfred R. Duarte Domain: https://alfred.ad ~~~ ![_Analog Designs Echo Logo Icon, **2022**_](/public/photos/analog-designs/analog-designs-echo-logo.png "Analog Designs Echo Logo Icon, Alfred R. Duarte 2022") > Design & Product # Analog Designs > **2020 – April 2025** > _**Note**: This covers one half of my responsibilities at **Analog Designs**. The companion article on my engineering responsibilities is still being developed._ 1. [**Branding**](#branding) 2. [**Design System**](#design-system) 3. [**UI Engine**](#ui-engine) 4. [**Diode**](#diode) 5. [**Website**](#website) #### Key responsibilities: - **Founder & CEO** - **Head of Product** - **Head of Design** - **Idea-to-product** - **Designed the logo & wordmark** - **Designed & implemented design systems** - **Designed marketing/branding assets** - **Developed VST 3 audio plugins** - **Developed VST 3-compatible UI framework** - **Developed and shipped OpenGL 2.1 graphics pipeline framework & graphics engine** - **Implemented custom DSP & Faust DSP pipelines** - **Collaborated with industry producers & engineers to refine products** - **Developed brand identity, visual direction, & voice** - **Created internal tools to accelerate design & engineering workflows** #### Tools used: - [Visual Studio](https://visualstudio.com/) - [Xcode](https://developer.apple.com/xcode/) (macOS development) - [VST 3 SDK](https://steinbergmedia.github.io/vst3_dev_portal/pages/index.html) - [Sketch](https://www.sketch.com/) - [Adobe Creative Cloud](https://www.adobe.com/creativecloud.html) (marketing assets) - [Notes (iCloud)](https://www.icloud.com/notes/) - [Calendar (iCloud)](https://www.icloud.com/calendar/) - [Reminders (iCloud)](https://www.icloud.com/reminders/) ## Branding I usually begin with simplicity.
Crafting a logo from simple curves/shapes means anyone can draw it–and it's memorable. I started with a very simple logo based on the concept of **stereophony** in audio recording. The placement of your mics affects the amplitude & timing–the **phase** of the recorded wave–which produces a **stereo** effect. ![_AB Stereo, **Godbhaal 2013, Public domain, [via Wikimedia Commons](https://commons.wikimedia.org/wiki/File:AB_Stereo.svg "File:AB Stereo.svg – Wikimedia Commons")**_](https://upload.wikimedia.org/wikipedia/commons/c/c8/AB_Stereo.svg "AB Stereo, Godbhaal 2013, Public domain, via Wikimedia Commons") If a bird chirps to your left, the distance traveled by the chirp to your right ear is greater than to your left. This creates a **phase difference**, which you perceive as a small delay in your right ear (a fraction of a millisecond at most). ![_Bird Chirp Phase Difference, **2025**_](/public/photos/misc/bird-chirp-phase-difference.png "Bird Chirp Phase Difference, Alfred R. Duarte 2025") I set up my mics and built out some logo concepts, keeping lines uncomplicated. ![_Analog Designs Logo Concept Ⅰ, **2022**_](/public/photos/analog-designs/analog-designs-logo-concept-i.png "Analog Designs Concept Ⅰ, Alfred R. Duarte 2022") But it felt uninspired & incomplete. I wanted to craft a logo that was **bold** & **mesmerizing**. I landed on the concept of the **interval of a sound wave**. Rather than fluid audio curves, I imagined acoustic pressure waves (more so their amplitude) like sonar. ![_Sonar Principle, **Georg Wiora (Dr. Schorsch) 2008, CC BY-SA 3.0, [via Wikimedia Commons](https://commons.wikimedia.org/wiki/File:Sonar_Principle_EN.svg "File:Sonar Principle EN.svg – Wikimedia Commons")**_](https://upload.wikimedia.org/wikipedia/commons/0/07/Sonar_Principle_EN.svg "Sonar Principle, Georg Wiora (Dr. Schorsch) 2008, Wikimedia Commons") Using these principles, I designed the **logo icon** motif. ![_Analog Designs Stacked Logo Iterations, **2022**_](/public/photos/analog-designs/analog-designs-stacked-logo-iterations.png "Analog Designs Stacked Logo Iterations, Alfred R. Duarte 2022") - Produced all branding elements, including the **logo icon** & **wordmark**. - Produced **brand guidelines** for the company. - Developed **color palette**, **fonts**, & **type presets** for the company. - Designed the logo around the concept of **sound wave intervals**. - Full concept boards were produced in _Sketch_. - Logo concepts were mocked in _Sketch_. - Iterations and final renders were produced in _Sketch_. ![_Analog Designs Stacked Logo Color, **2022**_](/public/photos/analog-designs/analog-designs-stacked-logo-color.png "Analog Designs Stacked Logo Color, Alfred R. Duarte 2022") ## Design System I've written a [companion article](/portfolio/design/case-study-ui-styles-i-2022/ "Case Study: UI Styles I, 2022 | Alfred R. Duarte | Portfolio") with a detailed breakdown of the **design system** I developed. ## UI Engine For more on the **OpenGL 2.1** (not ES2.1–_the 19-year-old 2.1_) **graphics pipeline framework**, **graphics engine**, & **UI framework** I developed, please see [this article](/portfolio/engineering/under-construction/ "UNDER CONSTRUCTION | Alfred R. Duarte | Portfolio"). ## Diode ![_Analog Designs Diode Promo – Sketch, **2022**_](/public/photos/analog-designs/analog-designs-diode-promo.png "Analog Designs Diode Promo – Sketch, Alfred R. Duarte 2022") - `3` months idea-to-product - **Designed** using _Sketch_. - **Developed in custom UI framework**. - **Implemented custom analog modeling DSP pipelines**.
- **Designed transistor-level circuit components** in _SPICE_. - **Implemented circuit components** in _Faust_, translated to _custom DSP_. - **Transistor**; **clipping network**; **biasing**; **RC filters**. - Implemented **custom DSP pipeline as a VST 3 plugin**. - **Collaborated with industry producers & engineers** to refine the product. ## Website Part of **Diode**'s marketing pitch is having control over its internal circuit components. I wanted to play on that using the feeling of _blueprint_ & _wireframe sketches_. ![_Analog Designs Diode Website Mockup – Sketch, **2022**_](/public/photos/analog-designs/analog-designs-diode-site.png "Analog Designs Diode Website Mockup – Sketch, Alfred R. Duarte 2022") - `2` weeks idea-to-product. - Check it out at [analogdesigns.io](https://analogdesigns.io/ "Analog Designs | Modern Analog Modeling"). - **Landing page**, **contact**, & **terms pages**. - Mockups were produced in _Sketch_. - Final renders were **translated to and developed in** _React_ & _Tailwind_. - Custom _react-parallax_ animations. --- _This project was ultimately paused due to a shortage of personal funds._ ~~~ The above material is owned by the author. This file was generated with SASHA. ~~~ ~~~ llm.txt Author: Alfred R. Duarte Domain: https://alfred.ad ~~~ ![_Flying Whale, **2022**_](/public/photos/spaceboy3000/flying-whale.png "Flying Whale, Alfred R. Duarte 2022") > Case Study — Design # Cartoon Stylized Graphics > **November 2022** 1. [**General**](#general) 2. [**Curves/Pen Tool**](#curves-pen-tool) 3. [**Colors/Gradients**](#colors-gradients) 4. [**Shading**](#shading) 5. [**Samples**](#samples) During _Fall 2022_, I was tasked with **mocking up a series of character designs** for a children's sticker subscription box. The client wanted a **cartoon-stylized look**, with fun, friendly characters. They asked for an "animation" style that "popped". After completing the project, I wanted to document my learnings & process. I produced a sprawling `20in` x `200in` poster detailing techniques ranging from **tooling** to **shading**, geared towards replicating the style in vector tools like **Adobe Illustrator**. I've split the poster into **`4`** sections to make it easier to include in this article. Techniques discussed are _generalized_ and based on **physical principles** (even though these are cartoons). Most techniques can be easily adapted to other styles. #### Tools used: - [Affinity Designer](https://affinity.serif.com/en-us/designer/) ## General ![_General Concepts – Cartoon Stylized Graphics Poster, **2022**_](/public/photos/spaceboy3000/cartoon-stylized-graphics-general.png "General Concepts – Cartoon Stylized Graphics Poster, Alfred R. Duarte 2022") ## Curves/Pen Tool ![_Curves/Pen Tool Concepts – Cartoon Stylized Graphics Poster, **2022**_](/public/photos/spaceboy3000/cartoon-stylized-graphics-curves-pen-tool.png "Curves/Pen Tool Concepts – Cartoon Stylized Graphics Poster, Alfred R. Duarte 2022") ## Colors/Gradients ![_Colors/Gradients Concepts – Cartoon Stylized Graphics Poster, **2022**_](/public/photos/spaceboy3000/cartoon-stylized-graphics-colors-gradients.png "Colors/Gradients – Cartoon Stylized Graphics Poster, Alfred R. Duarte 2022") ## Shading ![_Shading Concepts – Cartoon Stylized Graphics Poster, **2022**_](/public/photos/spaceboy3000/cartoon-stylized-graphics-shading.png "Shading Concepts – Cartoon Stylized Graphics Poster, Alfred R. Duarte 2022") ## Samples Below are some discarded samples that never made it to production.
![_Día de los Muertos Holiday Card Sample, **2022**_](/public/photos/spaceboy3000/ddlm-holiday-card.png "Día de los Muertos Holiday Card, Alfred R. Duarte 2022") ![_Girl Singing into Closed Hand Sample, **2022**_](/public/photos/spaceboy3000/sister-22-bday.png "Girl Singing into Closed Hand, Alfred R. Duarte 2022") ![_Turtle Cruising a Lazy River Sample, **2022**_](/public/photos/spaceboy3000/turtle-lazy-river.png "Turtle Cruising a Lazy River, Alfred R. Duarte 2022") --- ### [🎨 **Book a meeting to discuss your characters. [➔](mailto:alfred.r.duarte@gmail.com)**]{.highlight} [**alfred.r.duarte@gmail.com**](mailto:alfred.r.duarte@gmail.com "Gmail – Alfred R. Duarte") ~~~ The above material is owned by the author. This file was generated with SASHA. ~~~ ~~~ llm.txt Author: Alfred R. Duarte Domain: https://alfred.ad ~~~ ![_Stromback Venues @ Davos 2025, **2024**_](/public/photos/stromback/stromback-davos.png "Stromback Venues, Davos 2025 Title Card, Alfred R. Duarte 2024") > Design # Stromback Venues, World Economic Forum 2025 > **November** - **December 2024** 1. [**Slides**](#slides) 2. [**Graphics**](#graphics) 3. [**Branding**](#branding) After my initial meetings with the clients, one requirement emerged: _design for the global elite & heads of state_. **Stromback Venues** provides venue leasing in **Davos, Switzerland** during the [World Economic Forum](https://www.weforum.org/ "The World Economic Forum"). Decks were shown to business & political leaders, including **Elon Musk**, the **BlackRock COO**, and **US government officials**. Decks were used in closing _**`25m`** **in leases**_ for the week of [WEF 2025](https://www.stromback.com/davos "Stromback – Davos Experiences"). I was contracted to provide: - **Initial concepts** - **Branding** - **Decks/presentation slides** - **Lay the groundwork & systems for asset creation** - **Render high-quality graphics for print/display** (_shown on **CNBC** coverage_) > _This project extends into a second project with [The Centaurs AI](/portfolio/design/the-centaurs-ai-summit-davos-25-2024/ "The Centaurs AI Summit (Davos '25), 2024")._ Clients came with a strong sense of how they wanted the customer to feel. My role was to interpret their vision—translating vibes into compelling concepts. Mature asset systems were handed off to a second team in Portugal to facilitate more rapid iterations, as they were in a closer time zone to Switzerland. This was a first-of-its-kind project for me, as I had never been contracted specifically for _deck slides_. I determined a simple goal: tell visual stories that excite viewers & incentivize sustained engagement. All without sacrificing the most important aspect of a deck presentation: _readability_.
#### Project length: **`5 weeks`** #### Materials produced: - **`1`** **logo** (multiple iterations); _Figma_ - **`1`** **wordmark** (multiple iterations); _Figma_ - **`1`** **deck**; _Figma_ - **`1`** **deck**; _Google Slides_ - **`2`** **deck themes**; _Figma_ - **`2`** **deck themes**; _Google Slides_ - **`30`** **slides**; _Figma_ - **`30`** **slides**; _Google Slides_ - **`16`** **slide themes**; _Figma_ - **`16`** **slide themes**; _Google Slides_ - **`10`** concept **slide themes**; _Figma_ - **`3`** large-scale **print-ready graphics**; designed in _Figma_, **made print-ready** in _Illustrator_ #### Tools used: - [Figma](https://www.figma.com/) - [Google Slides](https://workspace.google.com/products/slides/) - [Adobe Illustrator](https://www.adobe.com/products/illustrator.html) (Screen-print Preparation) - [Google Drive](https://workspace.google.com/products/drive/) (Team Collaboration) ## Slides I was solely responsible for producing the main event deck, presented to prospective customers. A modern, refined, "_ultra-premium_" look was chosen to capture the emergence of generative AI across global markets. Certain slides blended a modern look with venue-specific branding, capturing their essence while maintaining overall cohesion. - Custom **deck slides** were designed in _Figma_. - Slide content was produced based on an outline document (bullet-points) provided by the clients. - Clients and I worked together to **develop a story** that I would turn into visual assets. - Upon approval, the _Figma_ designs were then **fully recreated in Google Slides** for presentation & sharing. - Further themes for **`2`** **additional decks** were produced in _Figma_, recreated in _Google Slides_, then handed off to the team in Portugal. - The client strongly preferred matching the aesthetics of the venues for the remaining **`2`** themes—a **speakeasy theme** and a **patriotic USA theme**. 🇺🇸 ## Graphics - **Screen-printed window-graphics** were produced for an exclusive venue attended by VIP US Government officials. - Large-scale print-ready graphics were designed in _Figma_. - Designs were drafted & vectorized in _Figma_, then **made print-ready** in _Adobe Illustrator_. - Graphics were **handed off for screen-printing** near Davos, Switzerland. ## Branding - Branding elements, including a **logo icon** & **wordmark**, were produced for an _exclusive venue_ attended by VIP US Government officials. - Drawing on the clients' existing assets and the theme of the event, I drafted designs in the style of **American high-end luxury brands**. - Full concept boards were produced in _Figma_; research conducted through various **first-party sites** & **in-season winter catalogs**. - Logo concepts were mocked in _Figma_. - Iterations and final render were produced in _Figma_. --- ### [🏔️ **Book a meeting to discuss your project. [➔](mailto:alfred.r.duarte@gmail.com)**]{.highlight} [**alfred.r.duarte@gmail.com**](mailto:alfred.r.duarte@gmail.com "Gmail – Alfred R. Duarte") ~~~ The above material is owned by the author. This file was generated with SASHA. ~~~ ~~~ llm.txt Author: Alfred R. Duarte Domain: https://alfred.ad ~~~ ![_Firecracker Card, **2025**_](/public/photos/agentic-design/firecracker-card.png "Firecracker Card, Alfred R. Duarte 2025") > Agentic Design & Product # Firecracker — On-Chain Credit Cards > **May 2025** 1. [**Research & Discovery**](#research-discovery) 2. [**v1: The Problem with "Features First"**](#v1-the-problem-with-features-first) 3.
3. [**v2: Crypto-First Pivot**](#v2-crypto-first-pivot)
4. [**v3: Decentralization as Hero**](#v3-decentralization-as-hero)
5. [**The Winning Formula**](#the-winning-formula)

The workflows **AI agents** offer allow _near instant_ iteration on ideas, market research, and product development.

In this light case study, I explore how **agentic design** could be applied to product. It blends highly emergent technologies with traditional product thinking.

An up-and-coming fintech company is launching a unique crypto product:

### [🧨 ***Firecracker***]{.highlight}

_on-chain credit cards._

We'll use **AI agents** to explore the product space, and iterate on the product.

#### Project length:

**`11 hours`**

#### Tools used:

- [Cursor](https://www.cursor.com/)
- [DeepSeek](https://chat.deepseek.com/) (Research & Discovery)
- [Claude](https://claude.ai/) (Research & Product Development)
- [React](https://react.dev/)
- [Tailwind CSS](https://tailwindcss.com/)
- [Hueshift](/portfolio/design/hueshift-2023/ "Hueshift | Alfred R. Duarte | Portfolio") (Color Palettes)

# Research & Discovery

[DeepSeek](https://chat.deepseek.com/ "DeepSeek – Into the Unknown") was used to analyze opportunity areas, and guide our path to **product–market fit**. [Claude](https://claude.ai/ "Claude") was used to substantiate findings and technical feasibility. Additional research was conducted to understand how **sBTC** unlocks Bitcoin-backed credit without custody risks.

### Competitor gaps

Existing crypto cards (like [Crypto.com](https://crypto.com/us/cards "Crypto.com Visa Card: The only card you need"), [BitPay](https://www.bitpay.com/card "Crypto Debit Card by BitPay | Turn Bitcoin Into Dollars Fast. Get Cash Back.")) rely on custodial models, and fiat transactions with crypto-incentives. This misses the DeFi-native user, who wants to ***pay in BTC & settle in BTC***.

### User pain points

Nearly **`1`** in **`5`** cryptocurrency owners have had difficulty accessing or withdrawing crypto funds from custodial platforms.[^1] Custodial platforms force users to lock their crypto in a centralized system, and to ask for permission to spend it.

### Technical feasibility

**sBTC**’s trustless peg allows automated collateralization without centralized issuers (like banks). You put **BTC** in, and get a loan amount—no asking, instant approval.

> [**Key insight:**]{.highlight} **_Decentralization_ was the unmet need, not just "crypto rewards."**

## _Learn a lil' crypto corner_ 💳

When you use a traditional credit card, the bank is the middleman. They are the custodian of your money—they decide when you can use it, and how much you can spend. Decentralization is about removing the middleman.

[sBTC](https://stacks-network.github.io/stacks/sbtc.html "sBTC: Design of a Trustless Two-way Peg for Bitcoin") enables you to transfer Bitcoin into smart contracts using [**Stacks' Layer 2** (L2) protocol](https://www.stacks.co/learn/introduction "Stacks 101 - Learn about Stacks and why Bitcoin"). You lock your **BTC** into **sBTC**, allowing you to interact with **dApps** that can _instantly_ process your transactions against programmable conditions (like an NFT).

An **on-chain credit card company** could set up a contract to issue credit against your **sBTC** (a code sketch follows the steps below):

1. You lock your **BTC** into **sBTC**.
2. You deposit your **sBTC** into the credit card company's smart contract.
3. The company mints a credit card NFT for you—no custody, zero permissions.
4. You spend on your collateralized **BTC**.
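As a thought experiment, here's a minimal Swift sketch of that flow. Everything in it is an illustrative assumption (the `CreditLine` type, the `50%` collateral ratio, the instant-approval rule), not Firecracker's actual contract logic:

```swift
import Foundation

// Hypothetical model of the lending steps above. The names and the
// 50% collateral ratio are invented for illustration.
struct CreditLine {
    let collateralSBTC: Double   // sBTC deposited into the smart contract
    let collateralRatio: Double  // 0.5 = borrow up to half your collateral

    var creditLimit: Double { collateralSBTC * collateralRatio }

    // A spend clears instantly if it stays within the limit:
    // no bank, no permission to ask for.
    func approves(spend: Double, outstanding: Double) -> Bool {
        outstanding + spend <= creditLimit
    }
}

let card = CreditLine(collateralSBTC: 1.0, collateralRatio: 0.5)
print(card.creditLimit)                             // 0.5
print(card.approves(spend: 0.2, outstanding: 0.1))  // true
print(card.approves(spend: 0.5, outstanding: 0.1))  // false
```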
You can even toss your **sBTC** into a _decentralized_ liquidity pool to earn interest. Other borrowers can borrow against the pool, and you earn interest on the collateral you supplied. Zero custodial risk, backed by 100% Bitcoin finality.[^2]

# v1: The Problem with "Features First"

### [[**Visit the v1 Site ➔**]{.highlight}](https://fintech-ghost-1.vercel.app/)

### Conversion Killer 🥶

**Generic fintech language** ("seamless," "global access") failed to stand out in the crypto space. **Multi-step signup process** added friction and discouraged users from completing the conversion.

### Missed Opportunity 😶‍🌫️

Value proposition buried below the fold under a generic card design.

> [**Key insight:**]{.highlight} **Crypto users need immediate proof of decentralization.**

@[110%](https://fintech-ghost-1.vercel.app/)

# v2: Crypto-First Pivot

### [[**Visit the v2 Site ➔**]{.highlight}](https://fintech-ghost-2.vercel.app/)

### What Worked 🤔

"**…on-chain credit**" hero text speaks _directly_ to Web3 users. Added **social proof** ("thousands of early adopters"). More attractive animated card mockup creates desire.

### New Problem 😵‍💫

Overwhelming **information density** (6 features + metrics).

> [**Key insight:**]{.highlight} **Crypto credit users need to _instantly_ understand the value proposition.**

@[110%](https://fintech-ghost-2.vercel.app/)

# v3: Decentralization as Hero

### [[**Visit the Finalized Site ➔**]{.highlight}](https://fintech-ghost-3.vercel.app/)

### Breakthrough Changes 😎

"**Fully decentralized credit card**" above the fold instantly communicates _differentiation_. Card-as-NFT implied through expiration date (**04/28 ≅ Bitcoin halving year**[^3]). DeFi integration as a **primary feature** (not just "rewards").

### Path to PMF 🏎️

**Monetize decentralization by packaging it as exclusive next-gen access.**

> [**Key insight:**]{.highlight} **Crypto users value sovereignty above all other features/metrics.**

@[110%](https://fintech-ghost-3.vercel.app/)

# The Winning Formula 🎯

### [Decentralization Promise (`v2`/`v3`) + Desire (`v2`) + Urgency (`v3`)]{.highlight}

## Why This Works for Crypto Audiences

### **Trust Through Transparency** 🤝

[**v3's "secured by you"**]{.highlight} > **`v1's "complete ownership"`** (_active_ vs _passive_)

### **Scarcity Engineering** 👀

No explicit waitlist number creates **artificial scarcity** (vs v2's "thousands")

### **Technical Credibility** 🔐

"Cryptographically secured on-chain" satisfies advanced users without alienating beginners. Language meets users where they are, highlighting the unique value proposition, without leaning on technical jargon.

## Conclusions 💭

Crypto users prioritize _ownership over convenience_. A path to **PMF**:

- Monetize decentralization by making it aspirational;
- Make the product experience feel unique and exclusive;
- And make the product feel secure and trustworthy.

In crypto, trust is currency. ***Amplify trust, simplify the experience.***

---

### [🧨 **Want to uncover your product's PMF? [➔](mailto:alfred.r.duarte@gmail.com)**]{.highlight}

[**alfred.r.duarte@gmail.com**](mailto:alfred.r.duarte@gmail.com "Gmail – Alfred R.
Duarte") [^1]: Cryptocurrency Adoption and Consumer Sentiment Report, [Security.org](https://www.security.org/digital-security/cryptocurrency-annual-consumer-report/ "2025 Cryptocurrency Adoption and Consumer Sentiment Report | Security.org") [^2]: Bitcoin has 100% finality on the Stacks L2 protocol thanks to the Nakamoto upgrade, [Stacks.org](https://stacks.org/nakamoto-is-here "Finally, Finality: Nakamoto Upgrade Sets the Stage sBTC, Bitcoin for Builders") [^3]: The fifth Bitcoin halving is anticipated around April 2028, [Kraken.com](https://www.kraken.com/learn/bitcoin-halving-history#article-block-9943a4d1f561 "What is Bitcoin Halving? | Blockchain.com") ~~~ The above material is owned by the author. This file was generated with SASHA. ~~~ ~~~ llm.txt Author: Alfred R. Duarte Domain: https://alfred.ad ~~~ ![_**LAYMAN** Logo, **2025**_](/public/photos/layman/layman-logo.png "LAYMAN Logo, Alfred R. Duarte 2025") > Engineering & Design # **LAYMAN** — Figma to SwiftUI Generator > **August 2024 — Present** **LAYMAN** is a [Figma Dev Mode](https://www.figma.com/plugin-docs/working-in-dev-mode/ "Working in Dev Mode | Plugin API | Figma") plugin that converts [Figma Frames](https://www.figma.com/plugin-docs/api/FrameNode/ "FrameNode | Plugin API | Figma") into [SwiftUI View](https://developer.apple.com/documentation/swiftui/view "View | Apple Developer Documentation") code. ***Below is a fork of the original project README.*** --- - [**Render as a Single File**](#render-as-a-single-file) - [**Generator Tags**](#generator-tags) - [**SwiftUI Control Transformations**](#swiftui-control-transformations) - [**Generator Settings**](#generator-settings) - [**Changes & Additions**](#changes-additions) - [**Upcoming Fixes**](#upcoming-fixes) - [**Upcoming Features**](#upcoming-features) ## Render as a Single File The generator can be configured to render all code as a single file. The file can be copy-pasted into a single Swift file that should compile without any additional dependencies. This is useful for testing and observing components during design. The following generator settings must be set: - **Effect Style Output** `Render in Place` - **Custom Fonts** `System (dynamic)` - **Text Style Output** `Render in Place` - **Component Instances** `Deep Render` - **Node Parent** `Crawl Document` - **Render Mode** `Preview` - **Vector Images** `Render in Place` - **Figma Variables** `Render in Place` ## Generator Tags Generator tags can be used to define custom behaviours, properties, or conversions for a layer. Tags are denoted by `{}` and are placed at the end of a layer name. **_Syntax:_** ### `{tag props}` - **`{}`** Denotes a generator tag. - **`tag`** The name of the tag. - **`props`** A comma-separated list of properties for the tag. - A colon separates a property name & value. - Strings are wrapped in double quotes. _Example:_ ``` Layer Name {geo name: geometryScreen, maxWidth} ``` --- ### **`{aspect}`** > _Alts:_ **`{aspectRatio}`** > _Props:_ > > - **`mode`** (_Optional_) The sizing mode, either `fit` or `fill`. Default is `fill`. > - **`ratio`** (_Optional_) The ratio, as a string. Ex: `16:9` Aspect Ratio. Applies an `.aspectRatio()` modifier to the view. If `mode` is set to `fit`, the aspect ratio is maintained while fitting the view within the frame. If `mode` is set to `fill`, the aspect ratio is maintained while filling the frame. If `ratio` is set, a custom aspect ratio is used. Otherwise the aspect ratio of the frame is used. 
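For instance, assuming a layer named `Hero Image {aspect mode: fit, ratio: "16:9"}`, the generated view might look roughly like this. This is a hand-written sketch of the expected output, not verbatim generator code; the `HeroImage` view and `heroImage` asset name are hypothetical, and only the `.aspectRatio()` modifier comes from the tag:

```swift
import SwiftUI

// Hypothetical output for: Hero Image {aspect mode: fit, ratio: "16:9"}
struct HeroImage: View {
    var body: some View {
        Image("heroImage") // assumed asset name
            .resizable()
            .aspectRatio(16 / 9, contentMode: .fit) // ratio: "16:9", mode: fit
    }
}
```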
---

### **`{convert}`**

> _Alts:_ **`{converter}`**
> _Props:_
>
> - **`view`** The name of the view.
> - **`props`** (_Optional_) Include the properties of the original view.
> - **`children`** (_Optional_) Include the children of the layer.
> - **`mod`** (_Optional_) Include the modifiers for the layer.
> - **`context`** (_Optional_) The context available within the view.
> - **`params`** (_Optional_) A string to render as the view's parameters.

Custom View Conversion. Converts the layer to a custom defined view.

Only the custom `view` name is included if no additional parameters are set.

If `props` is present, the properties of the original view are included in the view. For text layers, this is the text content.

If `children` is present, the children of the layer are included in the view.

If `mod` is present, the modifiers for the layer are included in the view.

If `context` is set, the custom context will be made available within the view.

If `params` is set, the string is rendered as the view's parameters.

If a layer name contains more than one `{convert}` tag, only the first tag is considered.

---

### **`{geo}`**

> _Alts:_ **`{geometry}`**
> _Props:_
>
> - **`name`** (_Optional_) Custom name for the geometry.
> - **`maxWidth`** (_Optional_) Use the width of the geometry for the maximum width.
> - **`maxHeight`** (_Optional_) Use the height of the geometry for the maximum height.

Geometry Reader. Wraps the view in a `GeometryReader`. The size of the geometry is used in place of any `fill` sizing properties.

If `maxWidth` is set, the width of the geometry is applied to the maximum size of the view. The same applies for `maxHeight`. Using both will use the full size of the geometry for the maximum size of the view. If neither is set, the geometry is ignored for sizing the view.

---

### **`{hide}`**

> _Alts:_ **`{hidden}`**

Hide Layer. Remove the view & its children from rendering.

---

### **`{import}`**

> _Props:_
>
> - **`lib`** The name of the library to import.

Library Import. If `Preview` mode is set, adds a library import to the top of the file.

---

### **`{mod}`**

> _Alts:_ **`{modifier}`**
> _Props:_
>
> - **`name`** The name of the modifier.
> - **`stack`** (_Optional_) The name of the modifier stack to apply the modifier to.
>   - **`pre`** Prerender.
>   - **`content`** Content & text.
>   - **`frame`** Sizing & padding.
>   - **`style`** View styling.
>   - **`layout`** Positioning.
>   - **`effect`** Effects.
>   - **`post`** Postrender.
> - **`stackPos`** (_Optional_) The position where the modifier should be placed in the stack.
>   - **`first`** Place the modifier at the beginning of the stack.
>   - **`last`** Place the modifier at the end of the stack.
>   - **`start`** Place the modifier at the beginning of the stack.
>   - **`end`** Place the modifier at the end of the stack.
> - **`props`** (_Optional_) A comma-separated list of properties to apply to the modifier.

View Modifier. Applies a Modifier to the View.

If `stack` is set, the modifier is applied to the named modifier stack. For example, `style` would apply the modifier to the same level as `.background()` & `.border()` modifiers. If no stack is found matching the name, the modifier is added to the `post` stack.

---

### **`{reader}`**

> _Props:_
>
> - **`geo`** (_Optional_) The name of the geometry to read from.
> - **`maxWidth`** (_Optional_) Use the width of the geometry for the maximum width of the view.
> - **`maxHeight`** (_Optional_) Use the height of the geometry for the maximum height of the view.
Read the size from a parent `GeometryReader`. The size of the parent geometry is used in place of any `fill` sizing properties.

If `geo` is set, the geometry with the matching name is used. If no name is set, the closest parent geometry is used. If no geometry is found matching the name, the reader is ignored.

If `maxWidth` is set, the width of the geometry is applied to the maximum width of the view. The same applies for `maxHeight`. Using both or neither will use the full size of the geometry for the maximum size of the view.

---

### **`{safeArea}`**

> _Props:_
>
> - **`ignore`** The edges of the safe area that will be ignored.
>   - **`all`** (_Default_) Ignore all edges.
>   - **`horizontal`** Ignore both leading & trailing edges.
>   - **`vertical`** Ignore both top & bottom edges.
>   - **`top`** Ignore the top edge.
>   - **`bottom`** Ignore the bottom edge.
>   - **`left`** Ignore the leading/left edge.
>   - **`right`** Ignore the trailing/right edge.

Ignore edges of the safe area. Allows content to flow outside the bounds of the safe area.

`ignore` **_must_** be present. If edge values are set, only the specified edges are ignored. If no edges are set, all edges are ignored.

---

### **`{scroll}`**

> _Alts:_ **`{scrollView}`**
> _Props:_
>
> - **`content`** (_Optional_) If present, wraps the **_content_** in a ScrollView.
> - **`axis`** (_Optional_) The axis along which scrolling happens. Default is the major axis of the layer.
>   - **`vertical`** Vertical scrolling.
>   - **`horizontal`** Horizontal scrolling.
>   - **`all`** Both vertical and horizontal scrolling.
> - **`showIndicators`** (_Optional_) Boolean value to show scroll indicators. Default is `true`.
> - **`hideBars`** (_Optional_) If present, hides the scroll bars. Overrides `showIndicators`.

Scroll View. Wraps the view in a `ScrollView`. If `content` is set, the **_content_** is wrapped in the scroll view.

---

### **`{view}`**

> _Props:_
>
> - **`name`** The name of the view.
> - **`context`** (_Optional_) The context available within the view.
> - **`params`** (_Optional_) A string to render as the view's parameters.

Custom View. Wraps the layer in a custom defined view.

If `context` is set, the custom context will be made available within the view. If `params` is set, the string is rendered as the view's parameters.

If a layer name contains more than one `{view}` tag, views are nested within each other. The first tag is the outermost view.

---

### **`{viewModel}`**

> _Props:_
>
> - **`name`** The name of the ViewModel property.
> - **`as`** (_Optional_) Defines the structure of the rendered property.
>   - **`var`** (_Default_) Render a variable.
>   - **`func`** Render a function.
> - **`type`** (_Optional_) The type of the ViewModel property. For functions, the return type.
> - **`value`** (_Optional_) The value of the ViewModel property. For functions, the function body.
> - **`params`** (_Optional_) The parameters of the ViewModel function.
> - **`prefix`** (_Optional_) A prefix to add to the ViewModel variable.

ViewModel Properties. Adds a property to the ViewModel. Creates either a variable or a function.

This is useful for creating custom placeholders in the `ViewModel` panel. These placeholders can be used to manage the state of any element in the main view.

---

## SwiftUI Control Transformations

Layer names can be transformed into SwiftUI controls. For example, a layer named `Divider` will be exported as a SwiftUI `Divider` view.
> _Note: Component Instances are **not** transformed._

_Example:_

```swift
Accept Button
```

↓↓

```swift
Button(action: $viewModel.acceptButtonAction) {
    // Accept Button Layer
}
```

### **`Button`**

A layer name that contains `Button` will be exported as a SwiftUI `Button` view. A custom action will be added to the `ViewModel` panel.

### **`Divider`**

A layer name that contains `Divider` will be exported as a SwiftUI `Divider` view. If the layer contains auto layout, the divider will be exported with padding applied. If the layer contains fixed sizing, the divider will be exported with a fixed size along the major axis of the parent view.

### **`Picker`**

A layer name that contains `Picker` will be exported as a SwiftUI `Picker` view. The layer is transformed into a `Picker` view and children become selectable items. The name of the layer is used as the label for the `Picker`. A custom selection binding, as an integer, will be added to the `ViewModel` panel. Item `1` is selected by default.

> _Tip: Wrap the `Picker` in a `Form` or `List` view to display the selection label._

### **`SecureField`**

A layer name that contains `SecureField` will be exported as a SwiftUI `SecureField` view. If the layer is a text layer, the text content will be used as the placeholder text. A custom text binding will be added to the `ViewModel` panel.

### **`Slider`**

A layer name that contains `Slider` will be exported as a SwiftUI `Slider` view. A custom value binding will be added to the `ViewModel` panel.

### **`Spacer`**

A layer name that contains `Spacer` will be exported as a SwiftUI `Spacer` view.

### **`TextField`**

A layer name that contains `TextField` will be exported as a SwiftUI `TextField` view. If the layer is a text layer, the text content will be used as the placeholder text. A custom text binding will be added to the `ViewModel` panel.

### **`Toggle`**

A layer name that contains `Toggle` will be exported as a SwiftUI `Toggle` view. A custom state variable will be added to the `ViewModel` panel.

---

## Generator Settings

### **Effects**

#### **Backdrop Blur Output**

Technique used to render backdrop blur effects.

- **`Material`** Render as SwiftUI Materials. Requires predefined blur types. System default.

| Radius | Swift Material           |
| ------ | ------------------------ |
| `100`  | `.ultraThin` _(default)_ |
| `200`  | `.thin`                  |
| `300`  | `.regular`               |
| `500`  | `.thick`                 |
| `600`  | `.ultraThick`            |
| `1000` | `.bar`                   |

- **`Modifier`** Render as a custom global modifier based on UIKit Materials. Requires predefined blur types.

| Radius | UIKit Material                  |
| ------ | ------------------------------- |
| `0`    | `.extraLight` _(default)_       |
| `1`    | `.light`                        |
| `2`    | `.dark`                         |
| `3`    | `.extraDark`                    |
| `4`    | `.regular`                      |
| `5`    | `.prominent`                    |
| `6`    | `.systemUltraThinMaterial`      |
| `7`    | `.systemThinMaterial`           |
| `8`    | `.systemMaterial`               |
| `9`    | `.systemThickMaterial`          |
| `10`   | `.systemChromeMaterial`         |
| `11`   | `.systemUltraThinMaterialLight` |
| `12`   | `.systemThinMaterialLight`      |
| `13`   | `.systemMaterialLight`          |
| `14`   | `.systemThickMaterialLight`     |
| `15`   | `.systemChromeMaterialLight`    |

- **`View`** Render as a custom view modifier. Allows for any blur radius amount. High accuracy, low performance.

#### **Effect Style Output**

How Figma Effect Styles are rendered.

- **`Modifier`** Render as a custom view modifier. The name of the effect style is used as the modifier name. Global style modifiers are generated in the `Effect Styles` panel.
- **`Render in Place`** Render the effect style in place.
No global style modifiers are generated, and the effect style is rendered directly on the view.

#### **Materials on Shapes**

Swift Materials can be applied to shapes as either the background of the content area or as the shape fill.

- **`Content Background (default)`** Render materials as `.background()` modifiers. The material is applied to the entire content area of the shape.
- **`Shape Fill`** Render materials as `.fill()` modifiers. The material is applied to the shape fill.

### **Fonts**

#### **Custom Fonts**

Allow custom fonts to be exported, or use system fonts. The system font can match to constants or take on dynamic sizes & styles from the design.

- **`Link by Name`** Use the name of the font in the design to match to a custom font.
- **`System Font (dynamic) (default)`** Use the system font for the platform as `.system()` modifiers with custom sizes matching the font size in the design.
- **`System Font (fixed)`** Match the system font constant nearest to the font size in the design.

#### **Font Weight Matching**

Font weights can be matched to system font weights or use the name of the font weight in the design (unsupported).

- **`Figma Variables (unsupported)`** Use the name of the font weight in the design as a font weight. Currently unsupported by the Figma API, as font style is used for both font style and font weight.
- **`System (default)`** Match the system font weight constant to the font weight in the design.

#### **Letter Spacing Style**

iOS renders spacing in text by kerning characters. Figma renders spacing in text by tracking characters.

- **`Kerning (iOS) (default)`** Translate the letter spacing to kerning.
- **`Tracking (Figma)`** Keep the letter spacing as tracking.

#### **Line Height**

Figma allows setting the line height, whereas iOS only natively supports spacing between lines.

- **`Keep Figma`** Keep the line height as set in Figma.
- **`Remove (default)`** Remove the line height. iOS will apply the default line height.

#### **System Font Size**

The iOS Dynamic Type system allows for text to be resized based on the user's preferred text size. When `Custom Fonts` is set to **System**, this setting defines how to translate the font sizes in the design to the system font size.

- **`Large (default)`** Use the large system font size set.

#### **Text Style Output**

Text styles can be exported as custom view modifiers or rendered in place.

- **`Modifier (default)`** Render text styles as custom view modifiers. The name of the text style is used as the modifier name. Global style modifiers are generated in the `Text Styles` panel.
- **`Render in Place`** Render the text style in place. No global style modifiers are generated, and the text style is rendered directly on the view.

### **Generator**

#### **Component Instances**

Instances of components can be exported as a single component, or as a deep render of the component.

- **`Deep Render`** Deep render the component in place of the instance.
- **`Link by Name (default)`** Use the name of the component in the design as the name of a custom View. The custom View is then called in place of the instance.

#### **Error Output**

Choose whether to output errors in the console terminal. In the Figma Desktop app, `Menu Bar` > `Plugins` > `Development` > `Show/Hide Console`.

- **`Show`** Show errors in the console terminal.
- **`Hide (default)`** Hide errors in the console terminal.

#### **Instance Layer Tags**

Layer Tags can be processed on Component Instances.
This feature can be disabled, for example, when main components were already rendered with the tags.

- **`Enabled`** Process Layer Tags on Component Instances.
- **`Disabled (default)`** Do not process Layer Tags on Component Instances.

#### **Layer Tags**

Layer Tags are special tags that can be used to define custom behaviours or properties for a layer. Tags are denoted by `{}` and are placed at the end of a layer name.

- **`Enabled (strict)`** Enable Layer Tags. Tags must be correctly formatted/capitalized to be recognized. Unrecognized tags are ignored.
- **`Enabled (unstrict) (default)`** Enable Layer Tags. Formatting & capitalization of tags are ignored. Unrecognized tags are ignored.
- **`Disabled (faster)`** Disable Layer Tags.

#### **Node Parent**

By default, the Figma API does not provide the parent of the selected node. This setting allows the document to be crawled to determine the parent of the selected node. Very slow, as the entire document may be crawled.

- **`Crawl Document (default)`** Crawl the document to determine the parent of the selected node.
- **`Only Selected (faster)`** Only use the selected node as the topmost parent.

#### **Progress Bar**

The total amount used for the progress bar displayed in the console terminal. The progress bar is used to indicate the progress of the code generation process. In the Figma Desktop app, `Menu Bar` > `Plugins` > `Development` > `Show/Hide Console`.

- **`1000`**
- **`500`**
- **`300`**
- **`100 (default)`**
- **`Disabled`**

#### **Render Mode**

The render mode determines how the code is generated in the main `SwiftUI` panel. The code can be rendered as a full preview, just a struct, or as only view code.

- **`Snippet`** Render only the view code.
- **`Struct`** Render view as a complete struct.
- **`Preview (default)`** Render the full preview for the struct.

#### **Vector Images**

Vector images can be rendered as `Path` views or linked as a custom view. Vectors are rendered in the `Vector Images` panel.

> _Note: Vector image rendering is very slow._

- **`Disabled (default)`** Do not render vector images.
- **`Link by Name`** Render the vector image in the `Vector Images` panel. The name of the vector image is used as the name of the custom view linked in the code.
- **`Render in Place`** Render each vector image as a `Path` view in the code. Potentially very slow.

### **METADATA**

#### **Component Descriptions**

Figma components may contain multiline descriptions. These descriptions can be exported as multiline comments in the output code.

- **`As Comments (default)`**
- **`Disabled`**

#### **Component Links**

Figma components may contain links to documentation, etc. These links can be exported as comments in the output code.

- **`As Comments (default)`**
- **`Disabled`**

#### **Component Set Names**

Figma components may be part of a component set. The name of the component set can be exported as a comment with each component variant in the output code.

- **`As Comments (default)`**
- **`Disabled`**

#### **Document Properties**

The entire document's properties can be exported as a single line JSON string in the `Document Properties` panel.

> _Note: This feature is currently in beta. Only the document ID can be exported. lol_

- **`Enabled (beta)`**
- **`Disabled (default)`**

#### **Layer Names**

Figma layers have names. These names can be exported as comments in the output code.
- **`As Comments (default)`**
- **`Disabled`**

#### **Layer Properties**

All properties of a layer can be exported as a single line JSON string in the output code.

- **`As Comments (default)`**
- **`Disabled`**

#### **Local Variables**

All variables local to the current document can be exported in the `Local Variables` panel.

> _Note: This feature is currently in beta. Only some variables can be exported._

- **`Enabled (beta)`**
- **`Disabled (default)`**

#### **Team Libraries**

Figma teams may have libraries shared across the team. All libraries present in the current document can be exported in the `Team Libraries` panel.

> _Note: This feature is currently in beta._

- **`Enabled (beta)`**
- **`Disabled (default)`**

### **SWIFTUI**

#### **Control Actions**

SwiftUI controls may contain actions. A placeholder action can be exported inline with the control or as a separate function in the `ViewModel` panel.

- **`Render in Place`** Render a placeholder action inline with the control.
- **`ViewModel (default)`** Render a placeholder action in the `ViewModel` panel.

#### **Duplicate Actions**

Two layers with identical names will produce ViewModel functions with identical names. A number suffix can be added to any duplicates found.

- **`Index Instances (default)`** Append a number suffix to duplicate ViewModel functions.
- **`Merge`** Merge duplicate ViewModel functions under a single name.

#### **SwiftUI Controls**

Layer names can be transformed into SwiftUI controls. For example, a layer named `Divider` will be exported as a SwiftUI `Divider` view.

- **`From Layer Name (default)`** Layer names are transformed into SwiftUI controls.
- **`Include Instances`** Component Instances will also be transformed into SwiftUI controls.
- **`Disabled`** Layer names are not transformed.

#### **ViewModel Actor**

Swift provides concurrency with actors. The `ViewModel` can be exported with the `@MainActor` global actor to ensure all operations are performed on the main thread.

- **`Basic Actor (default)`** Render the `ViewModel` panel as a basic actor.
- **`Main Actor`** Render the `ViewModel` panel with the `@MainActor` global actor.

### **Output**

#### **Dynamic Size**

Figma layers may contain dynamic sizing properties, such as `fill` width or height. The Figma API does not provide these properties. This setting allows for the dynamic sizing properties to be exported. To use this feature, `Node Parent` must be set to `Crawl Document`.

- **`Always (default)`** Export dynamic sizing properties for any layer.
- **`Must Select Parent`** To export dynamic sizing properties, a parent containing auto layout must be selected.

#### **Image Aspect Ratio**

Images can be exported with their aspect ratio preserved. A `resizable` modifier is applied to the image view to respect the aspect ratio.

- **`Mesh from Frame`** Use the frame dimensions to create a resizable mesh.
- **`Preserve (default)`** Preserve the aspect ratio of the image and apply a resizable modifier.

#### **Layer Mask**

Figma layers may contain masks. Masking is not currently supported by the generator. Masks can be disabled from output or rendered as shapes for testing.

- **`Disable (default)`** Do not render masks.
- **`Render as Shape`** Render masks as shapes.

#### **No Layer Name**

When a layer contains no name, it must be given a unique name to be exported. This setting chooses the default name to be given to layers without a name.
- **`"FigmaLayer" (default)`**
- **`"Unnamed"`**

### **Transforms**

#### **Opacity Transforms**

Figma represents opacity as `33%` percentages. SwiftUI represents opacity as `0.33` decimals. This setting allows for the conversion of opacity values.

- **`Enabled (default)`** Convert opacity values to decimals.
- **`Keep Figma`** Keep opacity values as integer percentages.

#### **Variable Transforms**

Exact values used in Figma may not translate directly to SwiftUI. This setting enables transformations of variables to better match SwiftUI. When a Figma variable is transformed, a `_mod` suffix is added to the variable name.

- **`Enabled (default)`** Apply transformations to variables.
- **`Keep Figma`** Keep variables as they are.

### **Variables**

#### **Duplicate Variables**

Figma variables may be used multiple times in a design. This setting allows for the identification of duplicate variables.

- **`Index Instances`**
- **`Merge (default)`**

#### **Figma Variables**

Figma variables used in the design are exported in a separate `Variables` panel. This setting enables rendering the value of each variable in place of the variable name in the output code.

- **`Link by Name (default)`** Use the name of the variable in the design as the name of the variable in the code.
- **`Render in Place`** Render the value of the variable in place of the variable name in the code.

## Changes & Additions

This plugin is supplied with extended support for SwiftUI, as well as many fixes & opinionated changes.

- **Only Supports SwiftUI**
- **Updated Swift 6.0 Output** Generated code implements new features introduced in Swift 6, like updated Preview Providers.
- **Strict SwiftUI Compliance** Figma designs are translated as accurately as possible with code generated to the latest Swift 6.0 standard.
- **Generator Tags** Added support for parsing generator tags in Figma layer names. Tags can be used to define custom behaviours or properties for a layer. Tags are denoted by `{}` and are placed at the end of a layer name.
- **Accurate Layout Properties** Layout properties are now exported accurately. Frames take into account both horizontal & vertical alignment when translating to Swift. Responsive layout properties are now supported. Nested elements are now exported with responsive layout properties.
- **GeometryReader Support** Geometry Readers are now supported. Geometry Readers can be used to pass the size of a parent view to a child view. Geometry Readers are exported as `GeometryReader` views.
- **ScrollView Support** Scroll Views are now supported. Scroll Views can be used to scroll the inner content of a view. Scroll Views are exported as `ScrollView` views.
- **Basic Vector Rendering** Basic vector shapes are now supported. Vectors are exported as `Path` views. Rendering is expensive and may not be accurate/precise.
- **Correct Border Rendering** Border properties for frames are now exported correctly as `.border()` modifiers. Previously, they were exported as `.overlay()` modifiers containing `Rectangles` with strokes. If a frame has a nonuniform border, only then will the border be exported as an overlay (as SwiftUI does not support individual border sizes on stacks).
  > _Note: To export as `.border()`, use a single **uniform border** size with **center alignment**._
- **Individual Border Sizes** Individual border sizes are now supported. SwiftUI does not support individual border sizes on stacks. Different techniques are used based on the border alignment.
| Align   | Output          | Accuracy  |
| ------- | --------------- | --------- |
| Inside  | `.overlay()`    | 🔵 Medium |
| Center  | `.overlay()`    | 🟡 Poor   |
| Outside | `.background()` | 🟢 High   |

For outside borders, a special technique using two `Rectangles` is used to simulate the border. This technique preserves rounded corners. A limitation is that certain uneven border sizes may not be supported, usually nonproportional increases to each side of the top & bottom anchors.

```ts
/* Figma  =>  SwiftUI */

     1                      2
  ________               ========
2 ||    || 2    =>    2 ||    || 2
  ||    ||              ||    ||
     0                      0

// fig1. Unsupported border ⚠️
```

```ts
/* Figma  =>  SwiftUI */

     2                      2
  ========               ========
2 ||    0       =>    2 ||    0
  ||                    ||
     0                      0

// fig2. Supported border ✅
```

> _Note: Only **Frame** & **Rectangle** layers currently support individual border output._

> _Note: Dashed borders are not supported._

- **Correct Shape Fill Rendering** Shape fills are now exported correctly as `.fill()` modifiers. Previously, they were exported as `.background()` modifiers. `.fill()` fills a shape, while `.background()` is applied to the entire background boundaries of a view. `.background()` modifiers are now only used for frames. Shapes without a fill correctly have `.fill(Color.clear)` applied.
- **Text Style Rendering** Text Styles are now fully exported as Text View modifiers. Previously, only style snippets were provided. Text Views are exported with their associated custom modifier applied. Duplicate text styles are identified and exported as a single custom modifier.
- **Accurate System Fonts** System fonts are now accurately exported. Previously, system fonts were exported as custom fonts. System fonts are now exported as either a system font constant or as `.system()` modifiers for custom sizes. Custom fonts are still supported.
- **Effect Style Rendering** Effect Styles are now fully exported as View modifiers. Previously, only modifiers were provided. Effects are exported with their associated custom modifier applied. Duplicate effects are identified and exported as a single custom modifier.
- **Figma Component Support** Figma Components & Instances are now recognized and exported correctly. Instances can be deep rendered or exported using the layer name. Variants & Overrides are not yet supported.
- **SwiftUI Image Support** Image fills are now supported. Images are exported as `Image` views. The image source is set to the name of the matching asset available in the **Assets** section at the bottom of the Dev Mode sidebar. All image properties supplied by the Figma API are supported.
- **Figma Variables** Added full support for Figma variables inside generated code.
  - color
    - ✅ solid
    - ✅ gradient
    - ✅ image
    - ⏳ multiple fills
  - sizing
    - ✅ width
    - ✅ minWidth
    - ✅ maxWidth
    - ✅ height
    - ✅ minHeight
    - ✅ maxHeight
  - spacing
    - ✅ gap
      > _Note: SwiftUI applies default spacing, so a value will always be exported to maintain the gap present in your designs._
    - ✅ padding
      - ✅ left
      - ✅ right
      - ✅ top
      - ✅ bottom
  - border
    - ✅ color
    - ✅ width
      - ✅ all
      - ✅ left
      - ✅ right
      - ✅ top
      - ✅ bottom
    - ✅ alignment
      - ✅ center
      - ✅ inside
      - ✅ outside
    - ✅ corner radius
      > _Note: SwiftUI does not support individual corner radii._
  - typography
    - ✅ text styles
    - ✅ color
    - ✅ characters (text content)
    - ✅ font family
    - ✅ font size
    - ✅ font weight
    - ✅ font style
    - ✅ line height
    - ✅ letter spacing
      > _Note: SwiftUI has poor support for paragraph spacing._
  - ✅ opacity
  - effects
    - ✅ drop shadow
      - ✅ color
      - ✅ x
      - ✅ y
      - ✅ blur
      - ✅ spread
        > _Note: SwiftUI does not support drop shadow spread._
    - ✅ background blur
      - ✅ radius
    - ✅ layer blur
      - ✅ radius
    - ✅ effect styles
    - ⏳ multiple effects
      > _Note: Only a single instance of each effect type, per frame, is currently supported._
- **Figma Variable Output** Figma Variables are now collected and exported in a separate `Variables` section, similar to the default Figma code snippet generator. This allows for easy access to variables used in the design.
- **Variable Transformations** Transformations for variables are now supported. For example, a shadow with identical size values will be twice as large on iOS as it is in Figma. Transformations are now applied to the definition of a variable constant (or when rendering variables in place).
- **SwiftUI Control Transformations** Layer names can be transformed into SwiftUI controls. For example, a layer named `Divider` will be exported as a SwiftUI `Divider` view.
- **ViewModel Rendering** Added support for constructing a ViewModel. The `ViewModel` panel contains the `ViewModel` struct, which can be used to manage the state of any SwiftUI Controls in the view.
- **Better Position Offsets** Absolute positions are now more accurately translated into offsets. Previously, position offsets were not calculated.
- **Background Blur Support** Background blur effects are now supported as 3 separate blur types. Using either Swift Materials, a custom global modifier based on UIKit Materials, or a custom view modifier. Swift & UIKit materials require predefined blur types, while the custom view modifier allows for any radius amount.
- **True Backdrop Blur Support** True backdrop blur effects are now supported thanks to [this](https://stackoverflow.com/a/73950386) ([code](https://gist.github.com/Rukh/0eeedcb99fe057d1dba00d426c3fa105)). A custom effect view is created and placed behind the rendered view, then the two views are wrapped in a `ZStack`. A second technique thanks to [this](https://medium.com/@edwurtle/blur-effect-inside-swiftui-a2e12e61e750) ([code](https://gist.github.com/edwurtle/98c33bc783eb4761c114fcdcaac8ac71)) can also be applied, though it essentially mimics SwiftUI materials.
- **Swift Context Rendering** Support for contexts within SwiftUI views has been added. Contexts are used to pass values down the view hierarchy. For example, a `GeometryReader` context can be used to pass the size of a parent view to a child view.
- **Layer METADATA Output** Layer METADATA can now be rendered as comments in the output code. Supports layer names, component descriptions and links. Component Set Names are exported with their Variants. An entire layer's METADATA properties can be exported as a single line JSON string. A rough sketch of this output follows below.
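For a sense of what that looks like, here's a sketch of METADATA rendered as comments. The layer name, description, and component set below are invented examples, and the exact comment placement may differ from the generator's output:

```swift
import SwiftUI

// Hypothetical sketch of METADATA comment output.
struct AcceptView: View {
    var body: some View {
        // Layer: Accept Button
        // Description: Primary confirmation action.
        // Component Set: Buttons
        Button(action: { /* placeholder action */ }) {
            Text("Accept")
        }
    }
}
```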
- **Parser Stack Improvements** The internal parser has been rewritten to support modifier groupings within the modifiers stack. As Swift is declarative, the order of modifiers is important. Modifier groups allow for high accuracy in the ordering of modifiers. Groups also allow for high precision in the placement of new modifiers within the stack.
- **Removed Auto Layout Inference** To simplify the renderer implementation, the `Inferred Auto Layout` calculation was removed. You must manually add Auto Layout to a frame to get layout properties. If you are unsure of what this means, you are unaffected.
- **Future Proofing** The plugin has been rewritten to be more modular and extensible. Future updates will be easier to implement.

### Things to Note

- A Figma Frame that contains no nested children will be exported as a `Rectangle` with the frame's properties. The Figma API classifies this unique case as a `Shape` and not a `Frame`; there is no way to discern between the types. This is a limitation of the Figma API and not the plugin.
- Figma does not support variables for font weights. The Figma API uses the font style as both the font style and font weight. All font weights are converted to their system font weight equivalent.
- A `SceneNode` in the Figma API is a generic class that represents a branch within the broader HAST structure of the Figma document. It is _not_ an object that can be rendered, as JSON for example.
- In the parser, the original `cloneNode()` function **_removes_** all references to Figma components. This will need to change to fully support Figma Components.
- The original `frameToRectangleNode()` function can be made optional. also: in the interest of time lots of `any`. sorry mom.

## Upcoming Fixes

- Fix `Spacer()` issue for auto layout.
- Fix capitalization for Instance Layer Names with special characters, ex: `"My Layer (not capitalized)"` → `MyLayernotCapitalized`.
- Vertical Sizing on Dividers. `height` becomes modifier name instead of `frame`?

## Upcoming Features

- ⏳ _`{size}` Layer Tag_
  - ex: `{size max-w: fill, max-h: fill}`
- ⏳ _`{params}` Layer Tag_
  - Replace View Properties with Custom Parameters.
- ⏳ _`{bind}` Layer Tag_
  - Create a `@Binding` property for the View.
  - name, type, value, preview.
- ⏳⏳ `safeAreaInset`
- ⏳⏳ _Generator Plugins_
- ⏳⏳⏳ _Figma Component Variants & Overrides_
  - Export variants as multiple components.
  - Export overrides as custom modifiers.
- ⏳⏳⏳⏳ _More Expressive SwiftUI Output_
- ⏳⏳⏳⏳ _Parser Cleanup_

~~~
The above material is owned by the author.
This file was generated with SASHA.
~~~

~~~ llm.txt
Author: Alfred R. Duarte
Domain: https://alfred.ad
~~~

![_`Sasha` Article Transformation, **2025**_](/public/photos/sasha/sasha-article-transformation.png "Sasha Article Transformation, Alfred R. Duarte 2025")

> Engineering

# Static Site Generator

> **April 2025**

### _GitHub_: **[`trainingmode/portfolio`](https://github.com/trainingmode/portfolio "trainingmode/portfolio: Micro static site generator.")**

1. [**Overview**](#overview)
2. [**Workflow**](#workflow)
3. [**Writing Articles**](#writing-articles)
4. [**Template System**](#template-system)
5. [**Portfolio Site**](#portfolio-site)
6. [**Benchmarks & Metrics**](#benchmarks-metrics)
7. [**Improvement Targets**](#improvement-targets)

### [[**→ *This site loads in under 0.8s on mobile, 0.2s on desktop***](https://pagespeed.web.dev/analysis/https-alfred-ad/6hw2k04xoa?form_factor=mobile "alfred.ad — Mobile Device Metrics — PageSpeed Insights")]{.highlight} (PageSpeed Insights)

My portfolio site is built with a custom [static site generator](https://developer.mozilla.org/en-US/docs/Glossary/SSG "Static site generator (SSG) - MDN Web Docs Glossary: Definitions of Web-related terms | MDN"). Markdown **articles** & folder **directories** are rendered into **HTML** templates.

I needed a simple system to write articles and generate a site that was effortless to maintain. My core focus was writing articles and designing a site that was **100%** accessible.

What I've learned from previous projects is to reduce or avoid complexity. I would have just written pure HTML, but that makes for a horrible experience writing neatly-formatted articles.

There is only **_`1`_** line of inline JavaScript on this site. It's on the email button CTA, needed to copy my email to your clipboard and close the context menu:

```javascript
onclick = `navigator.clipboard.writeText("alfred.r.duarte@gmail.com");this.blur();`;
```

Everything else is just **vanilla HTML** & **Tailwind CSS**.

~[autoplay muted loop playsinline width="1400" class="rounded-lg"](/public/media/alfred-portfolio-lighthouse-metrics.mp4 "video/mp4")

My [Lighthouse metrics](https://pagespeed.web.dev/analysis/https-alfred-ad/6hw2k04xoa?form_factor=mobile "alfred.ad — Mobile Metrics — PageSpeed Insights") speak for themselves. I didn't even know it had fireworks.

#### Tools used:

- [Cursor](https://www.cursor.com/)
- [Pandoc](https://pandoc.org/)
- [Bash 5](https://www.gnu.org/software/bash/manual/bash.html)
- [Vercel serve](https://github.com/vercel/serve) (dev server)
- [Chokidar](https://github.com/paulmillr/chokidar)
- [Tailwind CSS](https://tailwindcss.com/)
- [HTML](https://html.spec.whatwg.org/multipage/)
- [Sketch](https://www.sketch.com/) (diagrams)

# Overview

Built on the **`Directory`** → **`Article`** folder structure concept. Similar to other Static Site Generators like [Jekyll](https://jekyllrb.com/ "Jekyll • Simple, blog-aware, static sites | Transform your plain text into static websites and blogs") or [Gatsby](https://www.gatsbyjs.com/ "The Best React-Based Framework | Gatsby"). It's essentially a **Markdown** converter with a simple template system. Generated sites are just HTML files, organized in the folder structure you provide in the input directory.

Internally, **articles** are converted using **Pandoc**. [Pandoc converts all files to HTML by default](https://pandoc.org/MANUAL.html#specifying-formats "Specifying formats – Pandoc – Pandoc User’s Guide"). While **Pandoc** can read many markup formats, the generator only supports articles written in **Markdown**.

A set of internal **plugins** preprocess special **Markdown** syntax, like **video** and **embedded iFrames**.

```md
# Video Component

∼[