Overview

 

This is for educational purposes only.  You can’t get Reddit content back once you delete it.  I’m not responsible if you delete your Reddit history, and then regret it.   This also won’t save you from law enforcement, government watchlists, and whatever other surveillance you’re under.

 The things you do and say on the internet are immediately entered into the public record, and the public record is rather fecund.  It cross-breeds and reproduces, and creates copies of itself such that the record itself becomes indelible.   In this way, the past becomes prologue, and at any time a person might be called to account for a lifetime of things said in the heat of an emotional Tweet war. 

To that end, there is no officially sanctioned way to wipe the content (posts and comments) of a Reddit account. This is a deliberate omission rather than a missing feature, as Reddit relies on a backlog of content to drive engagement and advertising traffic. While they'll let you delete your account, the content created by that account persists. They're counting on a person being unwilling to delete each comment and post one at a time.

My own motivations for

Extracting Data From OpenAir Without API Access

I had a recent need to pull a lot of data out of OpenAir.  There was a requirement to audit some data specific to each employee of the organization.

Ordinarily this sort of task would come with API access to the system in question, and it would be fairly trivial to retrieve the required data and offload it to my workstation for the requisite processing.

Unfortunately, I do not have API access to the OpenAir instance in question. Furthermore, the instance is accessed through Okta, which adds an additional layer of abstraction to the issue. Without the Okta layer in place, I might be able to goose it directly from a script.

So how do we access hundreds of pages of data on a website that sits behind another website, and which provides no documented API access?

Let’s try Selenium.

The Okta issue is actually pretty easy to solve.  If we tell Selenium to navigate to the Okta login page, and feed the appropriate credentials to the relevant form elements, it’ll log us in to the Okta instance.

Please note that in the script below, we're storing the credentials in a separate file rather than hard-coding them.
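Here's a minimal sketch of that login step. The Okta URL, the form element IDs, and the credentials.json layout are assumptions of mine and will likely need adjusting for your instance:

import json
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

with open('credentials.json') as credentials_file:
    credentials = json.load(credentials_file)
# Load the Okta username and password from a separate file

driver = webdriver.Chrome()
driver.get('https://<your org>.okta.com')
# Navigate to the Okta login page

driver.find_element(By.ID, 'okta-signin-username').send_keys(credentials['username'])
driver.find_element(By.ID, 'okta-signin-password').send_keys(credentials['password'])
driver.find_element(By.ID, 'okta-signin-submit').click()
# Feed the credentials to the relevant form elements and submit the form.
# The element IDs used here are assumed; check the ones on your Okta login page.

time.sleep(15)
# Give Okta time to finish the login (and any MFA prompt) before navigating on to OpenAir

Once that session exists, the same driver object can be pointed at the OpenAir pages and their contents scraped from the page source.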

Here’s a quick one.   Fetch a list of all projects in a Jira Cloud instance, then fetch a list of all of the issues in each project.  Paginate through the resulting list of issues, and for each issue write the issue key and issue status to a CSV file.

import requests
import json
import base64
import csv

cloud_username = "<email>"
cloud_token = "<token>"
cloud_url = "<cloud URL>"

def credentials_encode(username, password):
    credentials_string = f'{username}:{password}'
    input_bytes = credentials_string.encode('utf-8')
    encoded_bytes = base64.b64encode(input_bytes)
    encoded_string = encoded_bytes.decode('utf-8')
    return encoded_string

encoded_cloud_credentials = credentials_encode(cloud_username, cloud_token)
# Encode the credentials that we provided

request_headers = {
    'Authorization': f'Basic {encoded_cloud_credentials}',
    'Content-Type': 'application/json',
    'Accept': 'application/json',
    'X-Atlassian-token': 'no-check'
}
# Create a header object used for the HTTP GET requests

get_projects = requests.get(f"{cloud_url}/rest/api/latest/project", headers=request_headers)
# Get a list of all projects in the instance

projects_json = json.loads(get_projects.content)
# Parse the list of projects from the JSON response

with open('project_issues.csv', 'w', newline='') as csvfile:
    csvwriter = csv.writer(csvfile)
    # Create a CSV file

    for project in projects_json:
        # Iterate through the list of projects

        start_at = 0
        max_results = 100
        # Declare variables used in pagination

        project_key = project['key']
        # Fetch the key of the current project from the JSON

        while True:
            # Loop until the search returns no more issues for the current project

            issue_search = requests.get(f"{cloud_url}/rest/api/latest/search?jql=project={project_key}&startAt={start_at}&maxResults={max_results}&fields=status", headers=request_headers)
            issues_json = json.loads(issue_search.content)
            # Fetch one page of issues for the current project

            issues = issues_json.get('issues', [])
            if not issues:
                break
            # Stop paginating once a page comes back empty

            for issue in issues:
                csvwriter.writerow([issue['key'], issue['fields']['status']['name']])
            # Write the issue key and status of each issue to the CSV file

            start_at += max_results
            # Advance the pagination offset to the next page

Management of users, groups, authentication, and directories happens outside of an organization's primary Atlassian Cloud domain. Even if an organization uses https://org1234.atlassian.net for their Jira, all user administration happens at https://admin.atlassian.com.

Atlassian has provided very little in the way of API methods by which Cloud users may be managed.  For example, the quickest way to bulk-change users from one authentication policy to another is to create a CSV, and import that CSV from the front end.   This is… not convenient.

Unlike domains at the organizational level, the Atlassian Admin portal doesn’t use a username and a token for authentication.   Instead, it uses a cloud.session.token.  When you navigate from an organizational domain to the Admin portal, this token is generated and stored as a cookie.

I haven’t yet figured out how to generate the cloud.session.token with Python.   Instead, what we’re first going to do is authenticate against the admin portal in our web browser, and then “borrow” that cookie for our script.  Here are the steps to do this:

  • Log in to the Atlassian Cloud in your browser
  • Go to https://admin.atlassian.com/
  • Right-click the page, and inspect
  • Open the network tab
  • Refresh the page
  • Locate the GET request that was sent to admin.atlassian.com
  • In the request headers of that request, find the cloud.session.token cookie and copy its value
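With the cookie value in hand, a minimal sketch of "borrowing" it in Python might look like the following; the placeholder value is yours to fill in, and the assumption that a plain GET with this cookie will be accepted is mine:

import requests

cloud_session_token = "<cloud.session.token value copied from the browser>"
# The borrowed session cookie value, pasted in from the network tab

session = requests.Session()
session.cookies.set("cloud.session.token", cloud_session_token, domain=".atlassian.com")
# Attach the borrowed cookie so requests to the Admin portal ride on the existing browser session

admin_response = session.get("https://admin.atlassian.com/")
print(admin_response.status_code)
# A 200 here suggests the borrowed token is being accepted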

This script builds upon the previous script.  It authenticates against both Jira Server and Jira Cloud.  It then takes a list of Projects as input, and compares the issue count for the project on the Server and Cloud side.

This would be most useful in the case of an ongoing migration, to validate a successful transfer of data.

import requests
import json
import base64

projects = []
#Define a list of target projects to compare

server_username = "<server username>"
server_password = "<server password>"
server_url = "<server URL>"
#Define connection parameters for the server side

cloud_username = "<Cloud username"
cloud_token = "<Cloud token>"
cloud_url = "<Cloud URL>"
#Define connection parameters for the Cloud side

server_credentials_string = f'{server_username}:{server_password}'
server_input_bytes = server_credentials_string.encode('utf-8')
server_encoded_bytes = base64.b64encode(server_input_bytes)
server_encoded_string = server_encoded_bytes.decode('utf-8')
#Encode the server credentials

cloud_credentials_string = f'{cloud_username}:{cloud_token}'
cloud_input_bytes = cloud_credentials_string.encode('utf-8')
cloud_encoded_bytes = base64.b64encode(cloud_input_bytes)
cloud_encoded_string = cloud_encoded_bytes.decode('utf-8')
#Encode the Cloud credentials

server_headers = {
    'Authorization': f'Basic {server_encoded_string}',
    'Content-Type': 'application/json',
    'Accept': 'application/json',
}
#Define the headers used to connect to the server

cloud_headers = {
    'Authorization': f'Basic {cloud_encoded_string}',
    'Content-Type': 'application/json',
    'Accept': 'application/json',
}
#Define the headers used to connect to the Cloud

#Iterate through the list of projects
for project in projects:
    server_issue_count_request = requests.get(f'{server_url}/rest/api/2/search?jql=project={project}',
                                            headers=server_headers)
    server_issue_count = json.loads(server_issue_count_request.content)['total']
    #Fetch the total issue count for the project on the Server side

    cloud_issue_count_request = requests.get(f'{cloud_url}/rest/api/2/search?jql=project={project}',
                                            headers=cloud_headers)
    cloud_issue_count = json.loads(cloud_issue_count_request.content)['total']
    #Fetch the total issue count for the project on the Cloud side

    if server_issue_count == cloud_issue_count:
        print(f'{project}: Server and Cloud issue counts match ({server_issue_count})')
    else:
        print(f'{project}: Server has {server_issue_count} issues, Cloud has {cloud_issue_count}')
    #Compare the two counts and report any mismatch

Connecting to Server and Cloud instances of Jira with Python is accomplished with much the same method and approach. The only difference between the two is that Server uses a username and password, while Cloud uses a username and token.

Generating a token is pretty straightforward. I recommend reading the documentation first.

The script below consists of essentially three pieces.   You define the connection parameters,  create the headers used to authenticate against the instance, and return the results of the authentication request.

The script example below returns one page of project results from each instance, just to demonstrate how it works.  If you wanted to actually work with the results, they’d need to be converted to JSON or some other format. 

The process for connecting to Confluence is the same; you need only point the script at a Confluence instance (and switch to returning some Space data or something).

import requests
import base64

server_username = "<username>"
server_password = "<password>"
server_url = "<url>"
#Define connection parameters for the server side

cloud_username = "<Cloud login email>"
cloud_token = "<Cloud token"
cloud_url = "<Cloud url>"
#Define connection parameters for the Cloud side

server_credentials_string = f'{server_username}:{server_password}'
server_input_bytes = server_credentials_string.encode('utf-8')
server_encoded_bytes = base64.b64encode(server_input_bytes)
server_encoded_string = server_encoded_bytes.decode('utf-8')
#Encode the server credentials

cloud_credentials_string = f'{cloud_username}:{cloud_token}'
cloud_encoded_string = base64.b64encode(cloud_credentials_string.encode('utf-8')).decode('utf-8')
#Encode the Cloud credentials

server_headers = {'Authorization': f'Basic {server_encoded_string}', 'Accept': 'application/json'}
cloud_headers = {'Authorization': f'Basic {cloud_encoded_string}', 'Accept': 'application/json'}
#Define the headers used to authenticate against each instance

server_projects = requests.get(f'{server_url}/rest/api/2/project', headers=server_headers)
print(server_projects.text)
#Return one page of project results from the Server instance

cloud_projects = requests.get(f'{cloud_url}/rest/api/latest/project', headers=cloud_headers)
print(cloud_projects.text)
#Return one page of project results from the Cloud instance
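As mentioned above, the Confluence version is nearly identical; a minimal sketch, assuming a Confluence Cloud instance reachable under the same cloud_url, is just a different endpoint:

confluence_spaces = requests.get(f'{cloud_url}/wiki/rest/api/space', headers=cloud_headers)
print(confluence_spaces.text)
#Return one page of Space results from Confluence Cloud, reusing the same credentials and headers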

Here's a very basic example of a script to review group membership on Jira Server/DC.

By fetching the groups first, and then the users in each group, we take the most efficient path: we only ever fetch users who actually belong to a group.

On the other hand, we could also tweak this script to show us users who are NOT in a group, or who are in X or fewer groups. That might be interesting, too.

 

import com.atlassian.jira.component.ComponentAccessor

def groupManager = ComponentAccessor.getGroupManager()
def groups = groupManager.getAllGroups()
def sb = []
//Define a list to hold the results

sb.add("<br>Group Name, Active User Count, Inactive User Count, Total User Count")
//Add a header to the buffer
groups.each{ group ->

    def activeUsers = 0
    def inactiveUsers = 0
    //Each time we iterate over a new group, the counts of active and inactive users are reset to zero

    def groupMembers = groupManager.getUsersInGroup(group)
    //For each group, fetch the members of the group
    
    groupMembers.each{ member ->
    //Process each member of each group

        def memberDetails = ComponentAccessor.getUserManager().getUserByName(member.name)
        //We have to fetch the full user object, using the *name* attribute of the group member
        
        if(memberDetails.isActive()){
            activeUsers += 1 
        }else{
            inactiveUsers += 1
        }
    }
    //Increment the count of active or inactive users, depending on each member's status

    sb.add("<br>${group.name}, ${activeUsers}, ${inactiveUsers}, ${activeUsers + inactiveUsers}")
    //Add a results row for this group to the buffer
}

return sb

There's a simple way to return a list of field configurations and field configuration schemes in Jira DC/Jira Server. However, in order to find that information you have to know that Jira once referred to these as field layouts.

Using the FieldLayoutManager class, this script returns a list of field layouts:

import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.fields.layout.field.FieldLayoutManager

def layoutManager = ComponentAccessor.getFieldLayoutManager()
def fieldLayouts = layoutManager.getEditableFieldLayouts()
def sb = []

fieldLayouts.each{ fieldLayout ->

    sb.add("<br> ${fieldLayout.name}")

}

return sb 

 

This script returns the field layout schemes, with a simple change of the method:

import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.fields.layout.field.FieldLayoutManager

def layoutManager = ComponentAccessor.getFieldLayoutManager()
def layoutSchemes = layoutManager.getFieldLayoutSchemes()
def sb = []

layoutSchemes.each{ layoutScheme ->

    sb.add("<br> ${layoutScheme.name}")

}

return sb 

This simple script fetches all projects, then fetches each issue in the project.  For each issue, it counts the number of attachments and adds it to a running tally for that project.

import com.atlassian.jira.component.ComponentAccessor

def projectManager = ComponentAccessor.getProjectManager()
def projects = projectManager.getProjectObjects()
def issueManager = ComponentAccessor.getIssueManager()

projects.each{ project ->

  def attachmentsTotal = 0
  def issues = issueManager.getIssueIdsForProject(project.id)

  issues.each{ issueID ->

    def issue = issueManager.getIssueObject(issueID)
    def attachmentCount = ComponentAccessor.getAttachmentManager().getAttachments(issue).size()
    attachmentsTotal += attachmentCount

  }
  log.warn(project.key + " - " + attachmentsTotal)
} 

Introduction

I’ve started working on a QR-code based inventory management and pricing system.   One of the foundational elements of this system is the ability to print a price tag with a QR code on it, and to be able to update the link associated with that QR code without replacing the sticker.

This is possible if the QR code links to bit.ly instead of directly to the link in question.   So long as the shortened URL is generated under a Bitly account, it can be edited and modified after the fact.

The Bitly API is at once well documented and a bit frustrating. It's frustrating because all of the example Python code on the internet uses the bitly_api package, which is apparently either abandoned or complete trash. For example, all of the examples on the internet result in an error like this:

  bitly_api.bitly_api.BitlyError: "PERMANENTLY_REMOVED"

 I assume this means that the method has been removed from the class or package, but I couldn’t find a way to fix it.

Instead, let's use the Python requests library to connect to the Bitly API and generate a shortened link.
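As a preview of where this is headed, here's a minimal sketch of shortening a link through Bitly's v4 API with requests; the token placeholder, the example long URL, and the variable names are mine:

import requests

bitly_token = "<Bitly access token>"
# A generic access token generated under your Bitly account

shorten_headers = {
    'Authorization': f'Bearer {bitly_token}',
    'Content-Type': 'application/json',
}
# The v4 API authenticates with a bearer token rather than Basic auth

payload = {'long_url': 'https://example.com/some/very/long/product/url'}
shorten_request = requests.post('https://api-ssl.bitly.com/v4/shorten', headers=shorten_headers, json=payload)
# POST the long URL to the shorten endpoint

print(shorten_request.json().get('link'))
# The JSON response includes the new shortened bit.ly link

Because the short link belongs to the account, its destination can be edited later without reprinting the QR code.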

Setup

First things first, you should go check