Regarding how this blog is perceived, beyond the value it generates for me, I spoke with a friend, Kevin (blog link here in the future, potentially!), and asked for his thoughts. His response was very positive and encouraging! Kevin referred me to another blog and suggested considering metrics for things like confidence and importance.
I really enjoy this idea, and the author of the blog in question appears to execute it very well. It would be informative to capture how I feel about what I have written and update that belief over time with more experience and perspective. Similarly, estimating how important the content is to the reader would be an interesting exploration, one that could help calibrate my writing style and topics with time and feedback. The author mentions an interactive sorting tool, Resorter, which they wrote to counteract polarized results in self-evaluated ranking systems. This was the exact pitfall I imagined running into myself before reading deeper into the article, and the proposed solution has piqued my curiosity!
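To make the idea concrete for myself: the trick is to never ask "rate this post 0-10" directly, but to ask "which of these two posts is better?" and fold the answers into ratings. This is a hypothetical sketch of that idea using a simple Elo update over adjacent pairs; it is not Resorter's actual algorithm, and the function name and parameters are my own invention:

```python
def elo_rank(items, compare, k=32, base=1000.0, rounds=2):
    """Rank items via pairwise comparisons instead of absolute scores.

    compare(a, b) must return whichever of the pair is preferred.
    Each comparison nudges both Elo ratings toward the observed outcome,
    so the final ordering emerges from many small relative judgments.
    """
    ratings = {item: base for item in items}
    ordered = list(items)
    for _ in range(rounds):
        # Compare adjacent pairs in the current ordering.
        for i in range(len(ordered) - 1):
            a, b = ordered[i], ordered[i + 1]
            # Expected score of `a` given the current rating gap.
            expected_a = 1 / (1 + 10 ** ((ratings[b] - ratings[a]) / 400))
            score_a = 1.0 if compare(a, b) == a else 0.0
            ratings[a] += k * (score_a - expected_a)
            ratings[b] += k * (expected_a - score_a)
        # Re-sort so the next round compares closer-matched neighbors.
        ordered.sort(key=ratings.get, reverse=True)
    return ordered
```

With a `compare` that just prefers the larger number, `elo_rank([1, 3, 2], max)` settles into descending order. For real posts, `compare` would be the interactive prompt asking me which entry I value more.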
In the context of this blog, I had just gone back through and re-edited all previous posts to try to improve the content and delivery. Some posts lean toward personal/observational points, others contain technical details that require prior knowledge of what I have been working on for context, and others are simply notes about learning. With this in mind, I have identified a potential list of new data points to add to my logs:
- Status (Completeness)
    - In Progress
- Confidence (Categorizing the type of content and my feelings about it)
    - Impossible - Certain (Kesselman List of Estimative Words)
- Importance (Estimated numeric value of information to the reader)
    - 0 - 10
- Context (If the contents of the article require contextual information from outside of the current entry)
- Modified Date
    - date (Currently used on blogs but not logs)
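In practice, these could live as frontmatter fields on each log entry. A hypothetical sketch of what one entry's header might look like; the field names, values, and date are placeholders of my own choosing, not a settled schema:

```yaml
---
date: 2021-01-01          # placeholder; the entry's original date
modified: 2021-01-01      # placeholder; last edit date
status: In Progress       # completeness
confidence: Probable      # Kesselman scale, Impossible - Certain
importance: 4             # 0 - 10, estimated value to the reader
context: true             # requires information from outside this entry
---
```

Keeping these machine-readable would also make the review-interval idea below easy to automate later.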
I’m going to sleep on this, as there are 33 posts that will require updating and evaluation - no small time commitment. However, this set feels fairly comprehensive moving forward and lets me capture more consistent metrics about my mental state and post status. Posts identified as anything less than “finished” can be added to Tasks, and posts with confidence levels could be reviewed after a specified interval since the date/modified date, to re-evaluate my confidence.
On a completely different note, I encountered two CSS games, Flexbox Froggy and Grid Garden. Since I’m trying to take it easy but still crave learning more, I knocked both of these out quickly. The way the content is presented is excellent, and I would absolutely recommend them to someone just starting out. They run like a very condensed course, yet still cover a lot of material with rapid feedback.
- => Work on the FFC CSS Grid Challenges
- => Modify CI/CD rules for Portfolio, AGWSU, and any other sites for the new October 1st runner minute restrictions
    - => Build only on Master commits; Dev builds must be triggered manually
    - => Timeout on testing limited to 5 minutes tops and triggered manually
    - => Lighthouse testing on trigger only
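Those CI/CD rules could be expressed roughly like this in a `.gitlab-ci.yml` (I'm assuming GitLab here, given the runner-minute restrictions; the job names, stages, and script commands are placeholders, not the sites' actual configuration):

```yaml
stages:
  - build
  - test

build:
  stage: build
  script:
    - make build                              # placeholder build command
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'     # automatic builds only on master
    - if: '$CI_COMMIT_BRANCH == "dev"'
      when: manual                            # dev builds triggered by hand

test:
  stage: test
  timeout: 5m                                 # hard cap on testing minutes
  script:
    - make test                               # placeholder test command
  rules:
    - when: manual                            # run only when triggered

lighthouse:
  stage: test
  script:
    - lhci autorun                            # assumes Lighthouse CI in the image
  rules:
    - if: '$CI_PIPELINE_SOURCE == "trigger"'  # pipeline/API triggers only
```

The `rules` blocks are what enforce the October restrictions: nothing runs automatically except master builds, so runner minutes are only spent deliberately.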
- 1 hour and 30 minutes