new_my_likes
Combine the new and old data:
# my_old_likes holds the previously saved data; dedupe on the post URL.
# (The distinct() call is a reconstruction of the truncated original code.)
deduped_my_likes <- dplyr::bind_rows(new_my_likes, my_old_likes) |>
  dplyr::distinct(URL, .keep_all = TRUE)
And, finally, save the updated data by overwriting the old file:
rio::export(deduped_my_likes, 'my_likes.parquet')
Step 4. View and search your data the conventional way
I like to create a version of this data specifically for use in a searchable table. It includes a link at the end of each post's text to the original post on Bluesky, letting me easily view any images, replies, parents, or threads that aren't in a post's plain text. I also remove some columns I don't need in the table.
my_likes_for_table <- deduped_my_likes |>
  mutate(
    # Append an HTML link back to the original post. The anchor markup was
    # lost in extraction, so this tag is a reconstruction; it renders because
    # escape = FALSE is set in datatable() below.
    Post = str_glue("{Post} <a href='{URL}' target='_blank'>>></a>"),
    ExternalURL = ifelse(!is.na(ExternalURL),
      str_glue("<a href='{ExternalURL}' target='_blank'>{substr(ExternalURL, 1, 25)}...</a>"),
      "")
  ) |>
  select(Post, Title, CreatedAt, ExternalURL)
Here's one way to create a searchable HTML table of that data, using the DT package:
DT::datatable(my_likes_for_table, rownames = FALSE, filter = "top", escape = FALSE,
  options = list(pageLength = 25, autoWidth = TRUE,
    lengthMenu = c(25, 50, 75, 100), searchHighlight = TRUE,
    search = list(regex = TRUE)
  )
)
This table has a table-wide search box at the top right and search filters for each column, so I can search for two terms in my table, such as the #rstats hashtag in the main search bar and then any post whose text contains LLM (the table's search isn't case sensitive) in the Post column filter bar. Or, because I enabled regular-expression searching with the search = list(regex = TRUE) option, I could use a single regexp lookahead pattern, (?=.*rstats)(?=.*LLM), in the search box.
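As a quick sanity check of how that lookahead pattern behaves, here is the same idea in base R with grepl(). (DT's search runs on the browser's JavaScript regex engine, not R's, so this is only an approximation; the sample posts are made up.)

```r
# Each (?=.*X) lookahead requires X to appear somewhere in the string,
# so the pattern matches only strings containing both "rstats" and "LLM"
pattern <- "(?=.*rstats)(?=.*LLM)"
posts <- c(
  "New #rstats package for calling LLM APIs",
  "LLM benchmark results",
  "A great #rstats data viz tip"
)
grepl(pattern, posts, perl = TRUE)
#> [1]  TRUE FALSE FALSE
```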
Generative AI chatbots like ChatGPT and Claude can be quite good at writing complex regular expressions. And with matching-text highlighting turned on in the table, it will be easy for you to see whether the regexp is doing what you want.
Query your Bluesky likes with an LLM
The simplest free way to use generative AI to query these posts is to upload the data file to a service of your choice. I've had good results with Google's NotebookLM, which is free and shows you the source text for its answers. NotebookLM has a generous file limit of 500,000 words or 200MB per source, and Google says it won't train its large language models (LLMs) on your data.
The query "Someone mentioned an R package with science-related color palettes" pulled up the exact post I was thinking of, one that I had liked and then re-posted with my own comments. And I didn't have to give NotebookLM my own prompts or instructions to tell it that I wanted to 1) use only that document for answers, and 2) see the source text it used to generate its response. All I had to do was ask my question.
I formatted the data to be a bit more useful and less wasteful by limiting CreatedAt to dates without times, keeping the post URL as a separate column (instead of a clickable link with added HTML), and deleting the external URLs column. I saved that slimmer version as a .txt and not .csv file, since NotebookLM doesn't handle .csv extensions.
my_likes_for_ai <- deduped_my_likes |>
  mutate(CreatedAt = substr(CreatedAt, 1, 10)) |>
  select(Post, Title, CreatedAt, URL)
rio::export(my_likes_for_ai, "my_likes_for_ai.txt")
After uploading your likes file to NotebookLM, you can ask questions right away once the file has been processed.
If you'd rather query the document within R instead of using an external service, one option is the Elmer Assistant, a project on GitHub. It should be fairly easy to modify its prompt and source data for your needs. However, I haven't had much luck running it locally, even though I have a fairly robust Windows PC.
Update your likes by scheduling the script to run automatically
To be useful, your underlying "posts I've liked" data needs to stay up to date. I run my script manually on my local machine periodically when I'm active on Bluesky, but you can also schedule the script to run automatically every day or once a week. Here are three options:
- Run a script locally. If you're not too worried about your script always running on an exact schedule, tools such as taskscheduleR for Windows or cronR for Mac or Linux can help you run your R scripts automatically.
- Use GitHub Actions. Johannes Gruber, the author of the atrrr package, describes how he uses free GitHub Actions to run his R Bloggers Bluesky bot. His instructions can be modified for other R scripts.
- Run a script on a cloud server. Or you could use an instance on a public cloud such as Digital Ocean plus a cron job.
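As one illustration of the first option, here is a minimal cronR sketch for Mac or Linux (the script path, schedule, and id are placeholders):

```r
library(cronR)
# Build a cron-ready command for the update script, then schedule it
# to run once a day at 8 AM
cmd <- cron_rscript("/path/to/update_bluesky_likes.R")
cron_add(cmd, frequency = "daily", at = "8AM", id = "bluesky-likes")
```

On Windows, taskscheduleR's taskscheduler_create() plays a similar role via the Windows Task Scheduler.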
You may want a version of your Bluesky likes data that doesn't include every post you've liked. Sometimes you might click like just to acknowledge that you saw a post, or to encourage the author that people are reading, or because you found the post amusing but don't otherwise expect to want to find it again.
However, a warning: It can get hard to manually mark bookmarks in a spreadsheet if you like a lot of posts, and you need to be committed to keeping it up to date. There's nothing wrong with searching through your entire database of likes instead of curating a subset with "bookmarks."
That said, here's a version of the process I've been using. For the initial setup, I suggest using an Excel or .csv file.
Step 1. Import your likes into a spreadsheet and add columns
I'll start by importing the my_likes.parquet file, adding empty Bookmark and Notes columns, and then saving the result to a new file.
my_likes <- rio::import("my_likes.parquet")
# Add empty Bookmark and Notes character columns at the front. (The original
# .after argument was garbled, so the column placement is reconstructed.)
likes_w_bookmarks <- my_likes |>
  mutate(Notes = as.character(""), .before = 1) |>
  mutate(Bookmark = as.character(""), .before = 1)
rio::export(likes_w_bookmarks, "likes_w_bookmarks.xlsx")
After some experimenting, I opted to make the Bookmark column characters, where I can enter just "T" or "F" in a spreadsheet, rather than a logical TRUE or FALSE column. With characters, I don't have to worry about whether R's Boolean fields will translate properly if I decide to use this data outside of R. The Notes column lets me add text explaining why I might want to find something again.
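One payoff of the character column is that pulling out bookmarked posts later is a plain string filter. This sketch assumes the likes_w_bookmarks data frame created above:

```r
library(dplyr)
# Keep only the rows marked "T" in the spreadsheet
bookmarked <- likes_w_bookmarks |>
  filter(Bookmark == "T")
```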
Next is the manual part of the process: marking which likes you want to keep as bookmarks. Opening the file in a spreadsheet is convenient because you can click and drag F or T down multiple cells at a time. If you have a lot of likes already, this may be tedious! You could decide to mark them all "F" for now and start bookmarking manually going forward, which may be easier.
Save the file manually back to likes_w_bookmarks.xlsx.
Step 2. Keep your spreadsheet in sync with your likes
After that initial setup, you'll want to keep the spreadsheet in sync with the data as it gets updated. Here's one way to implement that.
After updating the new deduped_my_likes likes file, create a bookmark-check lookup, and then join that with your deduped likes file.
# Pull the existing bookmark decisions from the spreadsheet...
bookmark_check <- rio::import("likes_w_bookmarks.xlsx") |>
  select(URL, Bookmark, Notes)
# ...then join them onto the refreshed likes. (The join itself was truncated
# in the original, so left_join() on URL is a reconstruction.)
my_likes_w_bookmarks <- left_join(deduped_my_likes, bookmark_check, by = "URL") |>
  relocate(Bookmark, Notes)
Now you have the new likes data joined with your existing bookmarks data, with entries at the top that have no Bookmark or Notes values yet. Save that to your spreadsheet file.
rio::export(my_likes_w_bookmarks, "likes_w_bookmarks.xlsx")
An alternative to this somewhat manual and labor-intensive process could be using dplyr::filter() on your deduped likes data frame to remove items you know you won't want again, such as posts mentioning a favorite sports team or posts on certain dates when you know you focused on a topic you don't need to revisit.
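That filtering could look something like this sketch, where the column names follow the earlier code but the pattern and dates are purely illustrative:

```r
library(dplyr)
library(stringr)
# Drop likes mentioning a sports team, plus likes from days you
# know you focused on a topic you won't need to revisit
trimmed_likes <- deduped_my_likes |>
  filter(
    !str_detect(Post, regex("GoSportsTeam", ignore_case = TRUE)),
    !substr(CreatedAt, 1, 10) %in% c("2024-11-05", "2024-11-06")
  )
```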
Next steps
Want to search your own posts as well? You can pull them via the Bluesky API in a similar workflow using atrrr's get_skeets_authored_by() function. Once you start down this road, you'll find there's a lot more you can do. And you'll likely have company among R users.
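A minimal sketch of that call (the handle and limit are placeholders; check atrrr's documentation for the current arguments):

```r
library(atrrr)
# Fetch your own posts, then reuse the cleaning and search workflow above
my_posts <- get_skeets_authored_by("myaccount.bsky.social", limit = 100)
```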