I had a recent need to pull a lot of data out of OpenAir. There was a requirement to audit some data specific to each employee of the organization.
Ordinarily this sort of task would come with API access to the system in question, and it would be fairly trivial to retrieve the required data and offload it to my workstation for the requisite processing.
Unfortunately, I do not have API access to the OpenAir instance in question. Furthermore, the instance is accessed through Okta, which adds an additional layer of abstraction to the issue. Without the Okta layer in place, I might be able to hit it directly from a script.
So how do we access hundreds of pages of data on a website that sits behind another website, and which provides no documented API access?
Let’s try Selenium.
The Okta issue is actually pretty easy to solve. If we tell Selenium to navigate to the Okta login page, and feed the appropriate credentials to the relevant form elements, it’ll log us in to the Okta instance.
Please note that in the script below, we’re storing the credentials in a separate file called credentials.py. We’re also using loguru for enhanced logging, instead of a simple print statement.
import time
import re
import json

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.common.exceptions import WebDriverException
import loguru

import credentials

logger = loguru.logger
chromedriver_path = '<chromedriver path>'

def init_driver():
    try:
        options = webdriver.ChromeOptions()
        # Point Selenium at the ChromeDriver executable
        service = Service(executable_path=chromedriver_path)
        driver = webdriver.Chrome(service=service, options=options)
        return driver
    except WebDriverException as e:
        logger.error(f"Failed to initialize WebDriver: {e}")
        return None

driver = init_driver()
if driver is None:
    logger.error("Exiting script due to WebDriver initialization failure.")
    exit(1)

url = '<okta url>'
driver.get(url)
logger.info(f"Fetching Okta login url: {url}")
time.sleep(5)

username_field = driver.find_element(By.NAME, "username")
password_field = driver.find_element(By.NAME, "password")
username_field.send_keys(credentials.username)
password_field.send_keys(credentials.password)
time.sleep(3)

button_element = driver.find_element(By.ID, "okta-signin-submit")
button_element.click()
logger.info("Submitted Okta login")
This script takes credentials from a separate file, feeds them into the username and password fields, then clicks the submit button.
The recurring instances of time.sleep() are important, because it’s very easy for Selenium to get ahead of itself. You need to deliberately slow it down sometimes.
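As an aside: if you can identify an element to wait for, Selenium’s explicit waits are a more robust alternative to fixed sleeps. Here’s a minimal sketch that waits for the same username field used above; the 10-second timeout is an arbitrary choice of mine:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Block for up to 10 seconds, or until the username field appears in the DOM
wait = WebDriverWait(driver, 10)
username_field = wait.until(EC.presence_of_element_located((By.NAME, "username")))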
The URL to OpenAir from within Okta should be consistent, so that can be hardcoded in, and retrieved with the web driver:
driver.get("<OpenAir URL>")
logger.info("Fetching OA homepage")
time.sleep(3)
oa_url = driver.current_url.split(";")
uid = oa_url[2].replace("uid=", "")
The second half of this snippet is how the UID for the current OpenAir login session is fetched. Each time a person logs in to OpenAir, they’re assigned a different UID. This UID is required to fetch any resources from within OpenAir. Fortunately, it’s part of the URL on the homepage, and can therefore be extracted.
Here’s an example. Say I wanted to go to the Resources page of OpenAir. The URL looks like this: https://<OA URL>.app.openair.com/webapi/v2/list/resource@resource/data?uid=xxxxxxxxxx&app=rm&page=1
The UID must be known, in order to submit a valid GET request through the browser. If we extract the UID from the URL of the homepage, we can make this work.
driver.get(f"<OA URL>.app.openair.com/webapi/v2/list/resource@resource/data?uid={uid}&app=rm&page=1")
logger.info(f"Fetching resources URL: https://<OA URL>.app.openair.com/webapi/v2/list/resource@resource/data?uid={uid}&app=rm&page=1")
time.sleep(3)
page_text = driver.page_source
And with that, we’ve successfully fetched the first page of results for the Resources section of OpenAir, without external API access.
So now what?
The format of the page returned by the (undocumented) API endpoint above appears to be JSON, but it’s actually JSON wrapped in HTML. For that reason, we need to extract the JSON before we can parse it:

total_pages = None
# total_pages gets set once we've parsed the first page
try:
    json_match = re.search(r'({.*})', page_text, re.DOTALL)
    if json_match:
        json_content = json.loads(json_match.group(1))
        meta = json_content.get("meta")
        total_pages = meta["total_pages"]
        logger.info(f"Found a total resource page count of {total_pages}")
    else:
        logger.warning("No JSON content found")
except json.JSONDecodeError:
    logger.error("Failed to decode JSON")

page_count = 1
all_user_data = []
while True:
    driver.get(f"https://<OA URL>.app.openair.com/webapi/v2/list/resource@resource/data?uid={uid}&app=rm&page={page_count}")
    logger.info(f"Fetching resource page {page_count}")
    driver.implicitly_wait(5)
    page_text = driver.page_source
    json_match = re.search(r'({.*})', page_text, re.DOTALL)
    if json_match:
        json_content = json.loads(json_match.group(1))
        logger.info(f"Found JSON for resource page {page_count}")
        initial_user_data = json_content.get("data", [])
        all_user_data.extend(initial_user_data)  # Collect all user data
    page_count += 1
    if total_pages and page_count > total_pages:
        break
The script first extracts the JSON from page #1 and reads the value of the total_pages attribute. Using that value, it then paginates through all of the pages of results, extracting the JSON from each page.
Here’s the full script, in case you’re curious:
import datetime
import time
import re
import json
from collections import defaultdict

import numpy as np
import loguru
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.common.exceptions import WebDriverException

import credentials

logger = loguru.logger
chromedriver_path = '<path to ChromeDriver>'
json_filename = f"oa_skills_export_{datetime.datetime.today().date()}.json"
oa_url = "<OpenAir subdomain>"
okta_url = "<okta url>"
okta_oa_url = "<url from Okta to OpenAir>"

def init_driver():
    try:
        options = webdriver.ChromeOptions()
        # Point Selenium at the ChromeDriver executable
        service = Service(executable_path=chromedriver_path)
        driver = webdriver.Chrome(service=service, options=options)
        return driver
    except WebDriverException as e:
        logger.error(f"Failed to initialize WebDriver: {e}")
        return None

driver = init_driver()
if driver is None:
    logger.error("Exiting script due to WebDriver initialization failure.")
    exit(1)

driver.get(okta_url)
time.sleep(5)
username_field = driver.find_element(By.NAME, "username")
password_field = driver.find_element(By.NAME, "password")
username_field.send_keys(credentials.username)
password_field.send_keys(credentials.password)
time.sleep(3)
button_element = driver.find_element(By.ID, "okta-signin-submit")
button_element.click()
time.sleep(3)

driver.get(okta_oa_url)
logger.info(f"Fetching OA homepage: {okta_oa_url}")
time.sleep(3)
oa_homepage_url = driver.current_url.split(";")
uid = oa_homepage_url[2].replace("uid=", "")

driver.get(f"https://{oa_url}.app.openair.com/webapi/v2/list/resource@resource/data?uid={uid}&app=rm&page=1")
logger.info(f"Fetching resources URL: https://{oa_url}.app.openair.com/webapi/v2/list/resource@resource/data?uid={uid}&app=rm&page=1")
time.sleep(3)
page_text = driver.page_source

total_pages = None
try:
    json_match = re.search(r'({.*})', page_text, re.DOTALL)
    if json_match:
        json_content = json.loads(json_match.group(1))
        meta = json_content.get("meta")
        total_pages = meta["total_pages"]
        logger.info(f"Found a total resource page count of {total_pages}")
    else:
        logger.warning("No JSON content found")
except json.JSONDecodeError:
    logger.error("Failed to decode JSON")

page_count = 1
all_user_data = []
while True:
    driver.get(f"https://{oa_url}.app.openair.com/webapi/v2/list/resource@resource/data?uid={uid}&app=rm&page={page_count}")
    logger.info(f"Fetching resource page {page_count}")
    driver.implicitly_wait(5)
    page_text = driver.page_source
    json_match = re.search(r'({.*})', page_text, re.DOTALL)
    if json_match:
        json_content = json.loads(json_match.group(1))
        logger.info(f"Found JSON for resource page {page_count}")
        initial_user_data = json_content.get("data", [])
        all_user_data.extend(initial_user_data)  # Collect all user data
    page_count += 1
    if total_pages and page_count > total_pages:
        break

# Fetch detailed user info and update the user data
for user in all_user_data:
    username = user.get("username", {}).get("props", {}).get("value")
    user_url = user.get("username", {}).get("props", {}).get("url")
    if user_url:
        logger.info(f'Fetching details for user: {username} from URL: {user_url}')
        driver.get(user_url)
        driver.implicitly_wait(5)
        user_data_text = driver.page_source
        user_json_match = re.search(r'({.*})', user_data_text, re.DOTALL)
        if user_json_match:
            user_json_content = json.loads(user_json_match.group(1))
            user['detailed_info'] = user_json_content
            logger.info(f'Added detailed info for user: {username}')

driver.quit()

with open(json_filename, "w") as json_file:
    json.dump(all_user_data, json_file, indent=4)
print(f"Combined user data with detailed info saved to {json_filename}")

# Read the JSON file back in for analysis
json_file_path = json_filename
with open(json_file_path, 'r') as file:
    user_json_data = json.load(file)

line_managers = []
users_without_manager = []
unique_skills = []

# Collect line managers and users without managers
for user in user_json_data:
    username = user["username"]["props"]["value"]
    if "line_manager" in user and user["line_manager"]:
        line_manager = user["line_manager"]["props"]["value"]
    else:
        line_manager = None
    if line_manager:
        if line_manager not in line_managers:
            line_managers.append(line_manager)
    else:
        users_without_manager.append(username)

# Collect the set of skill names per department, grouped by skill type
department_skills_count = defaultdict(lambda: defaultdict(set))
for user in user_json_data:
    department = user["department"]
    if "detailed_info" in user and user["detailed_info"] and "skills" in user["detailed_info"]:
        skills = user["detailed_info"]["skills"]
        if skills is not None:
            for skill_type, skill_list in skills["types"].items():
                for skill in skill_list:
                    department_skills_count[department][skill_type].add(skill["name"])

# Tally how often each individual skill appears
for user in user_json_data:
    try:
        skills = user["detailed_info"]["skills"]['types']['Skills']
        if skills is not None:
            for skill in skills:
                unique_skills.append(skill['name'])
    except (KeyError, TypeError):
        pass

values, counts = np.unique(unique_skills, return_counts=True)
skill_count_dict = dict(zip(values, counts))

# Map each skill back to the users who hold it
skill_to_users = defaultdict(list)
for user in user_json_data:
    username = user["username"]["props"]["value"]
    if "detailed_info" in user and user["detailed_info"] and "skills" in user["detailed_info"]:
        skills = user["detailed_info"]["skills"]
        if skills is not None:
            for skill_type, skill_list in skills["types"].items():
                for skill in skill_list:
                    skill_to_users[skill["name"]].append(username)
So what’s the point? The point is that there are limitations, and then there are limitations. Just because a system is nominally set up a certain way, with apparent guard rails and restrictions, does not mean you have to respect those limitations.
Here’s a quick one. Fetch a list of all projects in a Jira Cloud instance, then fetch a list of all of the issues in each project. Paginate through the resulting list of issues, and for each issue write the issue key and issue status to a CSV file.
import requests
import json
import base64
import csv

cloud_username = "<email>"
cloud_token = "<token>"
cloud_url = "<cloud URL>"

def credentials_encode(username, password):
    credentials_string = f'{username}:{password}'
    input_bytes = credentials_string.encode('utf-8')
    encoded_bytes = base64.b64encode(input_bytes)
    encoded_string = encoded_bytes.decode('utf-8')
    return encoded_string

encoded_cloud_credentials = credentials_encode(cloud_username, cloud_token)
# Encode the credentials that we provided
request_headers = {
    'Authorization': f'Basic {encoded_cloud_credentials}',
    'Content-Type': 'application/json',
    'Accept': 'application/json',
    'X-Atlassian-token': 'no-check'
}
# Create a header object used for the HTTP GET requests

get_projects = requests.get(f"{cloud_url}/rest/api/latest/project", headers=request_headers)
# Get a list of all projects in the instance
projects_json = json.loads(get_projects.content)
# Convert the list of projects to JSON

with open('project_issues.csv', 'w', newline='') as csvfile:
    csvwriter = csv.writer(csvfile)
    # Create a CSV file
    for project in projects_json:
        # Iterate through the list of projects
        start_at = 0
        max_results = 100
        # Declare variables used in pagination
        project_key = project['key']
        # Fetch the key of the current project from the JSON
        while True:
            # Loop until pagination is complete
            get_project_issues = requests.get(f'{cloud_url}/rest/api/latest/search?jql=project="{project_key}"&maxResults={max_results}&startAt={start_at}', headers=request_headers)
            # Use the project key to get the issues for the project in question
            project_issues_json = json.loads(get_project_issues.content)
            # Load the results of the project issue query into JSON
            print(f'{cloud_url}/rest/api/latest/search?jql=project="{project_key}"&maxResults={max_results}&startAt={start_at}')
            # Print a statement that tracks where pagination is at for each project, so we know that it's actually doing something
            if "does not exist for the field" in str(get_project_issues.content):
                # Some projects are archived, and therefore don't play nice
                # If the system can't find the project, just end the loop
                break
            for issue in project_issues_json['issues']:
                # Iterate through the issues for this page of results
                fields = issue['fields']
                csvwriter.writerow([f"{issue['key']} - {fields['status']['name']}"])
                # Write the issue key and issue status to the CSV we currently have open
            start_at += max_results
            # Increment the pagination
            if start_at > project_issues_json['total']:
                # If the pagination for this project has reached the end,
                # break the loop and start on the next project
                print(f"{project_key} has {project_issues_json['total']} issues")
                break
Management of users, groups, authentication, and directories happens outside of an organization’s primary Atlassian Cloud domain. Even if an organization uses https://org1234.atlassian.net for their Jira, all user administration happens on https://admin.atlassian.com.
Atlassian has provided very little in the way of API methods by which Cloud users may be managed. For example, the quickest way to bulk-change users from one authentication policy to another is to create a CSV, and import that CSV from the front end. This is… not convenient.
Unlike domains at the organizational level, the Atlassian Admin portal doesn’t use a username and a token for authentication. Instead, it uses a cloud.session.token. When you navigate from an organizational domain to the Admin portal, this token is generated and stored as a cookie.
I haven’t yet figured out how to generate the cloud.session.token with Python. Instead, what we’re first going to do is authenticate against the admin portal in our web browser, and then “borrow” that cookie for our script. Here are the steps to do this:
1. Log in to https://admin.atlassian.com in your browser.
2. Open your browser’s developer tools and view the cookies stored for the site (in Chrome, that’s under Application > Cookies).
3. Find the cookie named cloud.session.token and copy its value.
Now we have the value we need to authenticate against the Admin portal.
Here’s some sample code that authenticates against the Admin Portal. You’ll need to supply your own Organizational ID. If you can’t find that, you should definitely NOT be running this script.
import requests
import loguru
org_id = "<org_id>"
logger = loguru.logger
cookies = {
    'cloud.session.token': '<cloud.session.token>'
}
admin_portal_get_request = requests.get(f"https://admin.atlassian.com/gateway/api/adminhub/um/site/{org_id}/users", cookies=cookies)
logger.info(admin_portal_get_request.status_code)
logger.info(admin_portal_get_request.content)
logger.info(admin_portal_get_request.cookies)
There’s an easier way to accomplish the same thing. After logging in to the Admin Portal, as above, you can use the browsercookie package to borrow the cookie already stored in your browser. The example below uses Firefox:
import requests
import browsercookie
import loguru
logger = loguru.logger
org_id = "<org ID>"
url = f'https://admin.atlassian.com/gateway/api/adminhub/um/site/{org_id}/users'
cookies = browsercookie.firefox() # Change to browser of your choice
response = requests.get(url, cookies=cookies)
logger.info(response.status_code)
logger.info(response.content)
logger.info(response.cookies)
This script builds upon the previous script. It authenticates against both Jira Server and Jira Cloud. It then takes a list of Projects as input, and compares the issue count for the project on the Server and Cloud side.
This would be most useful in the case of an ongoing migration, to validate a successful transfer of data.
import requests
import json
import base64
projects = []
#Define a list of target projects to compare
server_username = "<server username>"
server_password = "<server password>"
server_url = "<server URL>"
#Define connection parameters for the server side
cloud_username = "<Cloud username>"
cloud_token = "<Cloud token>"
cloud_url = "<Cloud URL>"
#Define connection parameters for the Cloud side
server_credentials_string = f'{server_username}:{server_password}'
server_input_bytes = server_credentials_string.encode('utf-8')
server_encoded_bytes = base64.b64encode(server_input_bytes)
server_encoded_string = server_encoded_bytes.decode('utf-8')
#Encode the server credentials
cloud_credentials_string = f'{cloud_username}:{cloud_token}'
cloud_input_bytes = cloud_credentials_string.encode('utf-8')
cloud_encoded_bytes = base64.b64encode(cloud_input_bytes)
cloud_encoded_string = cloud_encoded_bytes.decode('utf-8')
#Encode the Cloud credentials
server_headers = {
    'Authorization': f'Basic {server_encoded_string}',
    'Content-Type': 'application/json',
    'Accept': 'application/json',
}
#Define the headers used to connect to the server
cloud_headers = {
    'Authorization': f'Basic {cloud_encoded_string}',
    'Content-Type': 'application/json',
    'Accept': 'application/json',
}
#Define the headers used to connect to the Cloud
#Iterate through the list of projects
for project in projects:
    server_issue_count_request = requests.get(f'{server_url}/rest/api/2/search?jql=project={project}',
                                              headers=server_headers)
    server_project_issue_json = json.loads(server_issue_count_request.content)
    server_project_issue_count = server_project_issue_json.get("total")
    #For each server project, fetch the total issue count
    cloud_issue_count_request = requests.get(f'{cloud_url}/rest/api/3/project/search?key={project}&expand=insight',
                                             headers=cloud_headers)
    cloud_project_issue_json = json.loads(cloud_issue_count_request.content)
    cloud_project_issue_count = cloud_project_issue_json['values'][0]['insight']['totalIssueCount']
    #For each Cloud project, fetch the total issue count
    print(f"Project {project} has {server_project_issue_count} issues on the server side and {cloud_project_issue_count} "
          f"on the cloud side")
Connecting to server and Cloud instances of Jira with Python is accomplished with much the same method and approach. The only differences between the two are that Server uses a username and password, while Cloud uses a username and token.
Generating a token is pretty straightforward. I recommend reading the documentation first.
The script below consists of essentially three pieces. You define the connection parameters, create the headers used to authenticate against the instance, and return the results of the authentication request.
The script example below returns one page of project results from each instance, just to demonstrate how it works. If you wanted to actually work with the results, they’d need to be converted to JSON or some other format, as shown in the short snippet after the script.
The process for connecting to Confluence is the same; you need only point the script at a Confluence instance (and switch to returning some Space data or something).

import requests
import base64
server_username = "<username>"
server_password = "<password>"
server_url = "<url>"
#Define connection parameters for the server side
cloud_username = "<Cloud login email>"
cloud_token = "<Cloud token"
cloud_url = "<Cloud url>"
#Define connection parameters for the Cloud side
server_credentials_string = f'{server_username}:{server_password}'
server_input_bytes = server_credentials_string.encode('utf-8')
server_encoded_bytes = base64.b64encode(server_input_bytes)
server_encoded_string = server_encoded_bytes.decode('utf-8')
#Encode the server credentials
cloud_credentials_string = f'{cloud_username}:{cloud_token}'
cloud_input_bytes = cloud_credentials_string.encode('utf-8')
cloud_encoded_bytes = base64.b64encode(cloud_input_bytes)
cloud_encoded_string = cloud_encoded_bytes.decode('utf-8')
#Encode the Cloud credentials
server_headers = {
    'Authorization': f'Basic {server_encoded_string}',
    'Content-Type': 'application/json',
    'Accept': 'application/json',
}
#Define the headers used to connect to the server
cloud_headers = {
    'Authorization': f'Basic {cloud_encoded_string}',
    'Content-Type': 'application/json',
    'Accept': 'application/json',
}
#Define the headers used to connect to the Cloud
server_login_request = requests.get(f'{server_url}/rest/api/2/project', headers=server_headers)
cloud_login_request = requests.get(f'{cloud_url}/rest/api/2/project', headers=cloud_headers)
#Initiate the connection requests to the server and cloud instances
print(f"Server connection HTTP status: {server_login_request.status_code}")
print(f"Cloud connection HTTP status: {cloud_login_request.status_code}")
print(f"Server response content: \n{server_login_request.content}")
print(f"Cloud response content: \n{cloud_login_request.content}")
#Print some data to confirm successful connection
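As promised, here’s a quick sketch of the conversion mentioned above; it assumes the two requests succeeded, and uses the JSON decoding built into the requests library:

server_projects = server_login_request.json()
cloud_projects = cloud_login_request.json()
# Each endpoint returns a list of project objects
print(f"Server returned {len(server_projects)} projects; Cloud returned {len(cloud_projects)}")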
Here’s a very basic example of a script to review group membership on Jira Server/DC
By first fetching the groups, and then the users in each group, we take the most efficient path toward only fetching the users who are in a group.
On the other hand, we could also tweak this script to show us users who are NOT in a group, or who are in X or fewer groups. That might be interesting, too.
import com.atlassian.jira.component.ComponentAccessor

def groupManager = ComponentAccessor.getGroupManager()
def groups = groupManager.getAllGroups()
def sb = []
//Define a string buffer to hold the results
sb.add("<br>Group Name, Active User Count, Inactive User Count, Total User Count")
//Add a header to the buffer
groups.each{ group ->
    def activeUsers = 0
    def inactiveUsers = 0
    //Each time we iterate over a new group, the count of active/inactive users gets set back to zero
    def groupMembers = groupManager.getUsersInGroup(group)
    //For each group, fetch the members of the group
    groupMembers.each{ member ->
        //Process each member of each group
        def memberDetails = ComponentAccessor.getUserManager().getUserByName(member.name)
        //We have to fetch the full user object, using the *name* attribute of the group member
        if(memberDetails.isActive()){
            activeUsers += 1
        }else{
            inactiveUsers += 1
        }
        //Increment the count of inactive or active users, depending on the current user's status
    }
    sb.add("<br>" + group.name + ", " + activeUsers + ", " + inactiveUsers + ", " + (activeUsers + inactiveUsers))
    //Add the results to the buffer
}
return sb
//Return the results
There’s a simple way to return a list of field configurations and field configuration schemes in Jira DC/Jira Server. However, in order to find that information you have to know that Jira once referred to these as field layouts.
Using the FieldLayoutManager class, this script returns a list of field layouts:
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.fields.layout.field.FieldLayoutManager

def layoutManager = ComponentAccessor.getFieldLayoutManager()
def fieldLayouts = layoutManager.getEditableFieldLayouts()
def sb = []
fieldLayouts.each{ fieldLayout ->
    sb.add("<br> ${fieldLayout.name}")
}
return sb
This script returns the field layout schemes, with a simple change of the method:
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.fields.layout.field.FieldLayoutManager

def layoutManager = ComponentAccessor.getFieldLayoutManager()
def layoutSchemes = layoutManager.getFieldLayoutSchemes()
def sb = []
layoutSchemes.each{ layoutScheme ->
    sb.add("<br> ${layoutScheme.name}")
}
return sb
This simple script fetches all projects, then fetches each issue in the project. For each issue, it counts the number of attachments and adds it to a running tally for that project.
import com.atlassian.jira.component.ComponentAccessor

def projectManager = ComponentAccessor.getProjectManager()
def projects = projectManager.getProjectObjects()
def issueManager = ComponentAccessor.getIssueManager()
projects.each{ project ->
    def attachmentsTotal = 0
    def issues = issueManager.getIssueIdsForProject(project.id)
    issues.each{ issueID ->
        def issue = issueManager.getIssueObject(issueID)
        def attachmentCount = ComponentAccessor.getAttachmentManager().getAttachments(issue).size()
        attachmentsTotal += attachmentCount
    }
    log.warn(project.key + " - " + attachmentsTotal)
}
I’ve started working on a QR-code based inventory management and pricing system. One of the foundational elements of this system is the ability to print a price tag with a QR code on it, and to be able to update the link associated with that QR code without replacing the sticker.
This is possible if the QR code links to bit.ly instead of directly to the link in question. So long as the shortened URL is generated under a Bitly account, it can be edited and modified after the fact.
The Bitly API is at the same time well documented, and a bit frustrating. It’s frustrating because all of the example Python code on the internet uses the bitly_api package, which is apparently either abandoned or complete trash. For example, all of the examples on the internet result in an error like this:
bitly_api.bitly_api.BitlyError: "PERMANENTLY REMOVED"
I assume this means that the method has been removed from the class or package, but I couldn’t find a way to fix it.
Instead, let’s use the requests HTTP library to connect to the Bitly API and generate a shortened link.
First things first, you should go check out the Bitly API documentation, linked here.
Second, install the requests package in your Python environment if you haven’t already done so. With pip it would look like pip install requests.
Third, generate a Bitly access token. Documentation on this can be found here (it’s not difficult).
Fourth, grab the group ID of your Bitly account. If you go into your Bitly account settings and then groups, the URL will look something like this:
https://app.bitly.com/settings/organization/Ox1nx1X1pxX/groups/Bo1ntGXNMqT
The bit at the end that looks like Bo1ntGXNMqT is the group ID you’re after. Yours will be different.
The code isn’t terribly complicated. We’re making a POST request to the API url, and feeding it the bare minimum JSON. From the content that is returned, we’re selecting the link parameter.
The long_url value is the one that needs to change, each time you want to shorten a new URL.
You also need to supply the bearer token (access token) and the group ID. Remove the curly brackets when you substitute your own values.
import requests
import json

bitly_api_url = "https://api-ssl.bitly.com/v4/shorten"
headers = {
    'Authorization': 'Bearer {TOKEN}',
    'Content-Type': 'application/json'
}
payload = {
    "group_guid": "{group ID}",
    "domain": "bit.ly",
    "long_url": "https://dev.bitly.com"
}
json_payload = json.dumps(payload)
post = json.loads(requests.post(bitly_api_url, headers=headers, data=json_payload).content)
link = post.get("link")
print(link)
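If you plan to shorten more than one link, it may be worth wrapping the call in a function. Here’s a minimal sketch under the same assumptions as above (you supply your own token and group ID):

import requests

def shorten(long_url, token, group_guid):
    # POST the minimal payload to the v4 shorten endpoint and return the short link
    response = requests.post(
        "https://api-ssl.bitly.com/v4/shorten",
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        json={"group_guid": group_guid, "domain": "bit.ly", "long_url": long_url},
    )
    response.raise_for_status()
    return response.json()["link"]

print(shorten("https://dev.bitly.com", "<TOKEN>", "<group ID>"))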
I wanted to create an interface in Python that had a row of icons at the top. Depending on the screen being displayed, I wanted one of those icons to be highlighted or a different color than the others.
This proved to be more challenging than I expected. You can set the color of all of the icons, but setting the color of a single one is a different story.
The solution that I came up with was to create a function that sets the target icon color when the program loads, rather than trying to do it as part of a KivyMD attribute of the TopAppBar widget.
def set_icon_color(self, dt):
    screen_1_icon = self.screen_1.ids.menu_app_bar.ids.right_actions.children
    #Tell the method where to find the homescreen icon
    screen_1_icon[0].theme_icon_color = "Custom"
    screen_1_icon[0].text_color = "00ADB5"
    #Define what the icon should look like
    screen_2_icon = self.screen_2.ids.menu_app_bar.ids.right_actions.children
    screen_2_icon[1].theme_icon_color = "Custom"
    screen_2_icon[1].text_color = "00ADB5"
    screen_3_icon = self.screen_3.ids.menu_app_bar.ids.right_actions.children
    screen_3_icon[2].theme_icon_color = "Custom"
    screen_3_icon[2].text_color = "00ADB5"
We call this function on program load. This sets the color of the target individual icon on each screen, without affecting the others. When I switch to any given screen, the appropriate icon is already highlighted with a distinct color.
The other thing I had to do was delay the calling of this function when the program loaded. Initially the function simply ran when the program started, but this led to the icons often not being highlighted, because the function had fired before the icons were fully drawn.
I resolved this by using Kivy’s Clock:
Clock.schedule_once(self.set_icon_color, 3)
This delays the adjusting of the icons by 3 seconds, which was all that was necessary to resolve the issue.
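For context, here’s roughly where that call lives; a sketch with a hypothetical app class name, assuming set_icon_color is defined on the app class as above:

from kivy.clock import Clock
from kivymd.app import MDApp

class InventoryApp(MDApp):  # Hypothetical app class name
    def on_start(self):
        # Wait 3 seconds after startup so the icons are fully drawn before recoloring them
        Clock.schedule_once(self.set_icon_color, 3)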
Once upon a time, things were simple. If you wanted to retrieve a page using the Confluence Java API, you simply called getPage(). Fetching Spaces was similarly easy, and intuitive.
Those days are over. The methods are deprecated. Instead, we now need to use SpaceService and ContentService to manage spaces and content, respectively. Let’s take a look at some examples of how a task would have been accomplished with the PageManager and SpaceManager, and compare that to how those tasks would be accomplished today.
import com.atlassian.confluence.pages.PageManager
import com.atlassian.confluence.spaces.SpaceManager
import com.atlassian.sal.api.component.ComponentLocator

def spaceManager = ComponentLocator.getComponent(SpaceManager)
def pageManager = ComponentLocator.getComponent(PageManager)
def spaces = spaceManager.getAllSpaces()
spaces.each { space ->
    def pagesInSpace = pageManager.getPages(space, true)
    pagesInSpace.each { page ->
        log.warn(page.getBodyAsString())
    }
}
Here’s the same code, using the SpaceService and ContentService classes:
import com.atlassian.confluence.api.model.Expansion
import com.atlassian.confluence.api.model.Expansions
import com.atlassian.confluence.api.model.content.Content
import com.atlassian.confluence.api.model.content.ContentBody
import com.atlassian.confluence.api.model.content.ContentRepresentation
import com.atlassian.confluence.api.model.content.ContentType
import com.atlassian.confluence.api.model.content.Space
import com.atlassian.confluence.api.model.pagination.PageResponse
import com.atlassian.confluence.api.model.pagination.SimplePageRequest
import com.atlassian.confluence.api.service.content.ContentService
import com.atlassian.confluence.api.service.content.SpaceService
import com.onresolve.scriptrunner.runner.ScriptRunnerImpl

def contentService = ScriptRunnerImpl.getPluginComponent(ContentService)
def spaceService = ScriptRunnerImpl.getPluginComponent(SpaceService)

SimplePageRequest pageRequest = new SimplePageRequest(0, 10)
PageResponse<Space> spaceResults = spaceService.find(new Expansion('name')).fetchMany(new SimplePageRequest(0, 10))
List<Space> spaces = spaceResults.getResults()
spaces.each { space ->
    def pageResult = contentService.find(new Expansion(Content.Expansions.BODY, new Expansions(new Expansion("storage"))))
        .withSpace(space)
        .fetchMany(ContentType.PAGE, pageRequest)
    List pages = pageResult.getResults()
    pages.each { page ->
        ContentBody body = page.getBody().get(ContentRepresentation.STORAGE)
        log.warn(body.getValue())
    }
}
After the very long list of imports, and the declaration of the space and content services, we create a SimplePageRequest object. The parameters that this takes, 0 and 10, are the start and the limit of pagination, respectively. That is, the request starts at index 0, and increments by 10.
The services, in turn, provide methods by which search criteria can be included with the find() statement.
After that runs, we have a list of spaces within the instance that match our criteria. We iterate through those spaces with a closure, and for each space we use the ContentService to retrieve content relating to that space. Worth noting is that you don’t need to start with the SpaceService; if you have a single space object, or some other criteria, you could use that as a .with() parameter attached to the ContentService.
The structure of the .find() statement is such that we must tell it which details we’d like to expand on; Atlassian calls these expansions. In effect, we tell the .find() statement which details about the content we’d actually like returned. If we don’t specify the expansions we need, the information will likely not be included in the response.
Finally, and I must stress this, we need to call for ContentRepresentation.STORAGE if we want to actually see the contents of the body. This information was incredibly difficult to find. You can call page.getBody() by itself as much as you like, but it’ll return an empty map.
I’m certain there are aspects of this service model that I’ve not touched on or encountered yet. If you have any suggestions for other people looking to make use of these services, please leave a comment.
This script fetches all of the projects in a Jira Cloud instance. It then fetches all of the project roles for that project, and finally fetches all of the users in that role for that project. In this way, it iterates through the projects and returns information about the users in the project roles.
import groovy.json.JsonSlurper

def sb = []
//Define a string buffer to hold the results
def getUsers = get("/rest/api/2/project")
    .header('Content-Type', 'application/json')
    .asJson()
//Get the list of projects in the instance
def content = getUsers.properties.rawBody
//Get the raw body contents of the HTTP response
def scanner = new java.util.Scanner(content).useDelimiter("\\A")
String rawBody = scanner.hasNext() ? scanner.next() : ""
def json = new JsonSlurper().parseText(rawBody)
//Turn the raw body contents into JSON
json.each{ project ->
    //Iterate through the projects
    sb.add("$project.name")
    def getRoles = get("/rest/api/2/project/$project.id/role")
        .header('Content-Type', 'application/json')
        .asObject(Map)
    //For each project, get the list of roles
    getRoles.body.each{ projectRole ->
        //Iterate through the project roles
        def getRoleMembers = get("$projectRole.value")
            .header('Content-Type', 'application/json')
            .asObject(Map)
        //Return the details about each role
        getRoleMembers.body.actors.each{ roleMember ->
            //Get all the actors (users) in that role
            sb.add("$getRoleMembers.body.name: $roleMember.displayName")
        }
    }
}
return sb
//Return the results
If you dive into the world of REST requests and APIs, you may encounter a CORS error that prevents your request from completing. CORS stands for Cross-Origin Resource Sharing. The browser’s same-origin policy is a security feature that prevents requests coming from one place (origin) from accessing resources in a different domain. CORS provides a standard for safely allowing such cross-origin requests.
Let’s talk about the example that I encountered. I wrote a JavaScript macro for Confluence Server, and I was trying to access a third-party API using that macro. However, Confluence macros run in the browser when the page loads, rather than running on the back-end Confluence server itself. Thus, while the Confluence server may be set up to address CORS, your browser almost certainly is not, and the request gets blocked.
We can address this by creating a custom REST API endpoint in Confluence (or Jira). In this way, we have the server making the request to the third party API, and the macro makes the request to the internal API.
In other words, the custom REST API endpoint acts like a middleman that doesn’t trip the CORS errors.

In order to set up a custom REST API in Confluence or Jira, you’ll need ScriptRunner. You’ll also need the URL of the REST API you’d like to connect to, and the credentials to access it.
Custom REST endpoints are a feature of ScriptRunner:
Add a new endpoint, and choose “Custom Endpoint” as the type. Finally, give it a name and provide the code. Here’s the basic framework of some code that will run as a REST API, courtesy of the Adaptavist Script Library:
import com.onresolve.scriptrunner.runner.rest.common.CustomEndpointDelegate
import groovy.json.JsonBuilder
import groovy.json.JsonSlurper
import groovy.transform.BaseScript
import javax.ws.rs.core.MultivaluedMap
import javax.ws.rs.core.Response

@BaseScript CustomEndpointDelegate delegate

doSomething(httpMethod: "GET") { MultivaluedMap queryParams, String body ->
    //Define the URL of the third-party API
    def apiUrl = "<API URL>"
    //Define the username and password for basic authentication
    def username = "<username>"
    def password = "<password>"
    //Encode the username and password
    def encodedCredentials = Base64.getEncoder().encodeToString("$username:$password".getBytes())
    //Make a request to the third-party API with basic authentication
    def connection = new URL(apiUrl).openConnection() as HttpURLConnection
    connection.setRequestProperty("Authorization", "Basic $encodedCredentials")
    connection.setRequestProperty("Accept", "application/json")
    connection.connect()
    def responseCode = connection.responseCode
    def headers = connection.getHeaderFields()
    def responseBody = connection.inputStream.text
    connection.disconnect()
    //Log the response details
    logResponse(responseCode, headers, responseBody)
    //Process the API response
    def result = processApiResponse(responseBody)
    //Return the processed result as the response
    return Response.ok(new JsonBuilder(result).toString())
        .header("Content-Type", "application/json")
        .header('Accept', 'application/json')
        .build()
}

//Log the response details
def logResponse(int responseCode, Map<String, List<String>> headers, String responseBody) {
    log.warn("Response Code: $responseCode")
    headers.each { headerName, headerValues ->
        log.warn("Header: $headerName - ${headerValues.join(', ')}")
    }
    log.warn("Response Body: $responseBody")
}

//Process the response from the third-party API
def processApiResponse(String apiResponse) {
    def responseJson = new JsonSlurper().parseText(apiResponse)
    //Extract the desired data from the API response
    def extractedData = responseJson
    //Perform any additional processing or transformation as needed
    //Return the processed data
    return [data: extractedData]
}
In this case we call the REST endpoint doSomething, and we access it by calling /rest/scriptrunner/latest/custom/doSomething. Because the credentials are stored in the code for the endpoint, the macro that calls the endpoint doesn’t need to worry about authenticating. The most basic example of a script in Confluence DC using a custom REST API endpoint is this:
import org.apache.http.HttpResponse
import org.apache.http.client.methods.HttpGet
import org.apache.http.impl.client.CloseableHttpClient
import org.apache.http.impl.client.HttpClientBuilder
import org.apache.http.util.EntityUtils

// Create an HTTP client
CloseableHttpClient httpClient = HttpClientBuilder.create().build()
// Define the URL for the HTTP request
String url = "https://<Confluence DC URL>/rest/scriptrunner/latest/custom/doSomething"
try {
    // Create an HTTP GET request
    HttpGet httpGet = new HttpGet(url)
    // Execute the request and retrieve the response
    HttpResponse response = httpClient.execute(httpGet)
    // Get the response body as a String
    String responseBody = EntityUtils.toString(response.getEntity())
    // Print the response body
    log.warn "Response: $responseBody"
} finally {
    // Close the HTTP client
    httpClient.close()
}
This calls the REST API endpoint stored on the Confluence Server, which returns the call made to the third-party API as JSON. From there, you can do whatever you’d normally do with a blob of JSON.
Tempo Planner allows for planning team capacity and schedules within Jira. However, you may have some need to pull that resource planning information out of the Tempo interface and add it to a ticket.
The Tempo API has some severe limitations, but where there’s a will there’s a way.
The first thing we’ll examine is how to get information on all of the teams in Tempo Planner. According to the documentation, this isn’t possible: per the API documentation, you can return only very limited information about plans and allocations.
Naturally I found this to be unacceptable, and I figured out a way to have the API return all of the teams. One of the undocumented API endpoints is a search function: /rest/tempo-teams/3/search. One of the tricks to using this endpoint is that it’s not a GET, it’s a POST, so we have to supply a search parameter as a payload. When we POST to this endpoint, we supply some JSON: {"teamSearchString": "<string>"}. But here’s the rub: the API will accept an empty search string, and return all of the teams as a result.
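Here’s what that might look like from Python; a sketch, with the base URL and credentials as placeholders:

import requests

jira_base_url = "https://<jira url>"  # Placeholder
# An empty search string causes the endpoint to return every team
response = requests.post(f"{jira_base_url}/rest/tempo-teams/3/search",
                         json={"teamSearchString": ""},
                         auth=("<username>", "<password>"))
print(response.json())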
Much like team info, there is no public Tempo API endpoint that will return all of the plans or allocations in the Jira instance. However, thanks to some information from 0xVIC’s GitHub repository, I found an endpoint that will work for this purpose. If we POST to /rest/tempo-planning/1/plan/search, we get back information about allocations in a given period of time.
The payload for the POST looks like this: {"from": "2023-01-01", "to": "2023-12-01"}. If the end date isn’t specified, it only looks ahead by one month.
Each object in the JSON that is returned by this query is an allocation. If you take the allocation ID and append it to the REST API like /rest/tempo-planning/1/allocation/2, where 2 is the allocation ID, you’ll get back all the information about that allocation.
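Putting those two endpoints together, here’s a hedged Python sketch that fetches the allocations for a date range and then pulls the detail for each one. The base URL and credentials are placeholders, and the assumption that each search result exposes its allocation ID in an id field is mine:

import requests

jira_base_url = "https://<jira url>"  # Placeholder
auth = ("<username>", "<password>")  # Placeholder credentials for basic auth
search_payload = {"from": "2023-01-01", "to": "2023-12-01"}
# POST the date range to the plan search endpoint
allocations = requests.post(f"{jira_base_url}/rest/tempo-planning/1/plan/search",
                            json=search_payload, auth=auth).json()
for allocation in allocations:
    # Assumption: each result carries its allocation ID in an "id" field
    allocation_id = allocation.get("id")
    detail = requests.get(f"{jira_base_url}/rest/tempo-planning/1/allocation/{allocation_id}",
                          auth=auth).json()
    print(detail)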
From that same GitHub repository, here are some Tempo-related REST API endpoints that might be useful to you.
/rest/tempo-accounts/1
/rest/tempo-accounts/1/account/key/
/rest/tempo-accounts/1/import/service
/rest/tempo-accounts/1/ratetable/currency
/rest/tempo-accounts/2
/rest/tempo-core/1/activitysources
/rest/tempo-core/1/analytics/track
/rest/tempo-core/1/expense
/rest/tempo-core/1/expense/category
/rest/tempo-core/1/favorites/
/rest/tempo-core/1/filter/?
/rest/tempo-core/1/filter/my?includeFavourites=
/rest/tempo-core/1/globalconfiguration/
/rest/tempo-core/1/issues
/rest/tempo-core/1/jira-properties/
/rest/tempo-core/1/plugin-info/plugin/is.origo.jira.tempo-plugin
/rest/tempo-core/1/project/config/
/rest/tempo-core/1/saved-reports
/rest/tempo-core/1/saved-reports/drafts
/rest/tempo-core/1/user/schedule
/rest/tempo-core/1/users/search
/rest/tempo-core/1/work-attribute
/rest/tempo-core/1/workloadscheme/
/rest/tempo-core/1/workloadscheme/move-members
/rest/tempo-core/2/holidayschemes/
/rest/tempo-core/2/holidayschemes/move-members
/rest/tempo-core/2/user/schedule/search
/rest/tempo-planning/1/allocation/
/rest/tempo-planning/1/permission/users/for-plan-permission
/rest/tempo-planning/1/plan
/rest/tempo-planning/1/plan-approval
/rest/tempo-planning/1/plan-approval/user/permission?
/rest/tempo-planning/1/plan/export
/rest/tempo-planning/1/plan/export/filter
/rest/tempo-planning/1/plan/remove/planLog/
/rest/tempo-planning/1/planSchedule/forTeam
/rest/tempo-planning/1/planSchedule/users
/rest/tempo-planning/1/plan/search
/rest/tempo-planning/1/userFilter
/rest/tempo-planning/2/capacity/export
/rest/tempo-planning/2/capacity/export/filter
/rest/tempo-planning/2/capacity/report
/rest/tempo-rest/1.0/accounts/json/billingKeyList/
/rest/tempo-rest/2.0/accounts/picker?query=
/rest/tempo-rest/2.0/activities/picker
/rest/tempo-rest/2.0/filters/picker
/rest/tempo-rest/2.0/issues/picker/
/rest/tempo-rest/2.0/issues/picker/internal?issueType=internal&actionType=logTime
/rest/tempo-rest/2.0/planning/supervisors?userKey=
/rest/tempo-rest/2.0/project/
/rest/tempo-rest/2.0/projects/picker
/rest/tempo-rest/2.0/scheduler-config
/rest/tempo-rest/2.0/search/picker
/rest/tempo-rest/2.0/user/getUser
/rest/tempo-rest/2.0/user/issues/
/rest/tempo-rest/2.0/users/picker/
/rest/tempo-teams/2/
/rest/tempo-teams/2/permissionGroups/myPermissions
/rest/tempo-teams/2/permissionGroups/team
/rest/tempo-teams/2/program
/rest/tempo-teams/2/role
/rest/tempo-teams/2/team/
/rest/tempo-teams/3/
/rest/tempo-teams/3/indexing
/rest/tempo-teams/3/locations
/rest/tempo-teams/3/locations/
/rest/tempo-teams/3/user-locations
/rest/tempo-teams/3/user-locations/
/rest/tempo-teams/3/user-locations/bulk
/rest/tempo-teams/3/user-schemes/users
/rest/tempo-teams/3/user-schemes/users/keys
/rest/tempo-teams/4/search/memberships
/rest/tempo-time-activities/1/issue/
/rest/tempo-timesheets/3/analytics/track
/rest/tempo-timesheets/3/period
/rest/tempo-timesheets/3/private/config/
/rest/tempo-timesheets/3/report/account/
/rest/tempo-timesheets/4/period
/rest/tempo-timesheets/4/period-configuration/
/rest/tempo-timesheets/4/private/days/search
/rest/tempo-timesheets/4/scheduler/grace-period/grant
/rest/tempo-timesheets/4/timesheet-approval
/rest/tempo-timesheets/4/timesheet-approval?
/rest/tempo-timesheets/4/timesheet-approval/approval-statuses?numberOfPeriods=
/rest/tempo-timesheets/4/timesheet-approval/log?
/rest/tempo-timesheets/4/timesheet-approval/user/
/rest/tempo-timesheets/4/worklogs/
/rest/tempo-timesheets/4/worklogs/export
/rest/tempo-timesheets/4/worklogs/export/filter
/rest/tempo-timesheets/5/period/approval?dateFrom=
/rest/tempo-timesheets/5/period/secondary?dateFrom=
There may come a day when you’re asked to create a large number of Confluence pages. Rather than doing it by hand, why not script it?
This Python script essentially does two things: it reads the CSV file, and it sends page creation requests to a Confluence server.
For each row in the CSV file, it assumes the page name should be the value in the first cell of the row. It then generates an HTML table that is sent as part of the page creation request.
Rather than generating HTML, this could be useful for setting up a large number of template pages, to be filled in by various departments. It could also run as a job, and automatically create a certain selection of pages every week or month, to store meeting notes or reports.
Please note that in order to connect to the Confluence server, you’ll need to generate a Personal Access Token.
import csv
import requests
import logging

# Initialize logging; INFO level so the success messages are visible
logging.basicConfig(level=logging.INFO)

api_url = 'https://<url>.com/rest/api/content/'
#What's the URL to your Confluence DC instance?
file_path = "<your CSV file path>"
#Where is the file stored locally?
parent_page_id = "<your parent page ID>"
#Which page should the new pages be created under?
space_key = "<your space key>"
#What is the key of the Space in which these pages should be created?
headers = {
    'Authorization': 'Bearer <PAT>',
    'Content-Type': 'application/json',
    'Accept': 'application/json',
}
#Create a set of headers to authenticate the request
#You'll need to create a Personal Access Token in your Confluence instance, in order to connect to it

#Read the CSV file
with open(file_path, 'r') as file:
    reader = csv.reader(file)
    rows = list(reader)

header_row = rows[0]
# Assume the first row is the column headings
html_header_row = "<tr>"
for header in header_row:
    html_header_row += f"<th>{header}</th>"
html_header_row += "</tr>"
#Generate the HTML table's header row

for row in rows[1:]:
    #Generate the HTML from the CSV rows, but skip the header row
    page_title = row[0]
    #We assume that the title of each page is stored in the first cell in each row
    html_row = "<tr>"
    for value in row:
        html_row += f"<td>{value}</td>"
    html_row += "</tr>"
    #Add the HTML to the row object
    html_table = f"<table>{html_header_row}{html_row}</table>"
    #Generate the complete HTML table
    page_data = {
        "type": "page",
        "title": page_title,
        "ancestors": [{
            "id": parent_page_id
        }],
        "space": {
            "key": space_key
        },
        "body": {
            "storage": {
                "value": html_table,
                "representation": "storage"
            }
        }
    }
    #Define the JSON that will be sent to create the Confluence page
    response = requests.post(api_url, headers=headers, json=page_data)
    #Send the POST request
    if response.status_code == 200:
        logging.info(f'Confluence page "{page_title}" created successfully.')
    else:
        logging.error(f'Failed to create Confluence page "{page_title}".')
        logging.error('Status code: %s', response.status_code)
        logging.error('Error: %s', response.text)
The amount of code required to fetch information from Confluence Cloud and bring it into Jira Cloud is a bit shocking. In a good way. Here’s the code:
def authString = "<authstring>"
def fieldConfigsResult = get("https://<url>.atlassian.net/wiki/rest/api/content/229377?expand=body.storage")
    .header('Content-Type', 'application/json')
    .header("Authorization", "Basic ${authString}")
    .asObject(Map)
def storage = fieldConfigsResult.body.body.storage.value
return storage
In the end it’s all just REST. So long as you can authenticate, UNIREST allows us to pretty easily fetch information from other sites. If you’d like to learn more about authenticating against Jira Cloud, check out my post on the subject.
The request on the Atlassian Forums that caught my eye last night was a request to return all Jira Cloud attachments with a certain extension. Ordinarily it would be easy enough to cite ScriptRunner as the solution to this, but the user included data residency concerns in his initial post.
My solution to this was to write him a Python script to provide simple reporting on issues with certain attachment extension types. Almost anything that the REST API can accomplish can be accomplished with Python; you don’t HAVE to have ScriptRunner.
The hardest part of working with external scripts is figuring out the authorization scheme. Once you’ve got that figured out, the rest is just the same REST API interaction that you’d get with UniREST and ScriptRunner for cloud.
Then:
1. Generate an API token: https://id.atlassian.com/manage-profile/security/api-tokens
2. Go to https://www.base64encode.net/ (or use the short Python snippet after this list to do the encoding)
3. Base-64 encode a string in exactly this format: youratlassianlogin:APIToken. If your email is john@adaptamist.com and the API token you generated is:
ATATT3xFfGF0nH_KSeZZkb_WbwJgi131SCo9N-ztA3SAySIK5w3qo9hdrxhqHZAZvimLHxbMA7ZmeYRMMNR
Then the string you base-64 encode is:
john@adaptamist.com:ATATT3xFfGF0nH_KSeZZkb_WbwJgi131SCo9N-ztA3SAySIK5w3qo9hdrxhqHZAZvimLHxbMA7ZmeYRMMNR
Do not forget the colon between the two pieces.
The website will spit out a single string of encoded text that looks like this: a21jY2xlYW5AYWQQhcHRhdmlzdC5jb206QVRBVFQzeEZmR0YwbkhfS1NlWlprYl9XYndKZ2kxMzFTQ285Ti16dEEzU0F5U0lLNXczcW85a
4. Stick the encoded string into the header variable like so:
headers = {
    'Authorization': 'Basic a21jY2xlYW5AYWQQhcHRhdmlzdC5jb206QVRBVFQzeEZmR0YwbkhfS1NlWlprYl9XYndKZ2kxMzFTQ285Ti16dEEzU0F5U0lLNXczcW85a',
    'Content-Type': 'application/json',
    'Accept': 'application/json',
}
Note that the word “Basic” and the encoded string are separated by just a single space, and that together they form a single header value.
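If you’d rather not paste credentials into a third-party website, the encoding from step 2 is a few lines in Python’s standard library:

import base64

email = "<your Atlassian login email>"
api_token = "<API token>"
# Base64-encode "email:token" for use in the Basic Authorization header
encoded = base64.b64encode(f"{email}:{api_token}".encode("utf-8")).decode("utf-8")
print(f"Basic {encoded}")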
Here’s an example of the script actually using this authorization scheme
import requests

max_results = 100
start_at = 0
attachment_arr = []
still_paginating = True
your_domain = "<domain>"
headers = {
    'Authorization': 'Basic <base-64 encoded string>',
    'Content-Type': 'application/json',
    'Accept': 'application/json',
}
# Define the global variables

while still_paginating:
    # We need to paginate through the results
    # We're iterating by 100 results at a time
    response = requests.get(f"https://{your_domain}.atlassian.net/rest/api/3/search?jql=project%20is%20not%20EMPTY&maxResults={max_results}&startAt={start_at}",
                            headers=headers)
    issue_keys = response.json().get("issues")
    # We start by returning all of the issues in the instance with a JQL search
    for issue in issue_keys:
        # Next, we're iterating through each result (issue) that was returned
        issue_key = issue.get("key")
        issue_response = requests.get(f"https://{your_domain}.atlassian.net/rest/api/3/issue/{issue_key}",
                                      headers=headers)
        issue_data = issue_response.json()
        # We query the system for more information about the issue in question
        attachments = issue_data.get("fields", {}).get("attachment", [])
        for attachment in attachments:
            # Specifically, we're after the ID of any attachment on the issue
            attachment_id = attachment.get("id")
            attachment_response = requests.get(f"https://{your_domain}.atlassian.net/rest/api/3/attachment/{attachment_id}",
                                               headers=headers)
            attachment_data = attachment_response.json()
            # Once we have the ID of the attachment, we can use that ID to get more information about the attachment
            filename = attachment_data.get("filename")
            if filename and (".csv" in filename or ".xlsx" in filename):
                attachment_arr.append(f"Issue {issue_key} has an attachment: {filename}")
    if len(issue_keys) < max_results:
        # Finally, we check to see if we're still paginating
        # If the number of results returned is less than the maximum page size,
        # we must be at the end of the line and can stop paginating by terminating the loop
        still_paginating = False
    start_at += 100

print(attachment_arr)
# Print the results
This script takes a list of custom field names, and searches each issue in the instance for places where that custom field has been used (i.e., where it has a value other than null).
In this way, we gain insight into the usage of custom fields within a Jira Cloud instance.
The script is interesting for a number of reasons. First, it’s another instance of us having to parse a rawBody response before we can make use of it. Second, it handles the need for pagination, which we’ve also talked about in recent posts.
My intent for this script is for it to serve as a basis for future custom field work in Jira Cloud. Namely, I’d like to be able to easily rename any field with “migrated” in the title.
import groovy.json.JsonSlurper

def stillPaginating = true
//Loop to paginate
def startAt = 0
//Start at a pagination value of 0
def maxResults = 50
//Increment pagination by 50
def fieldNames = ["Actual end", "Actual start", "Change risk", "Epic Status"]
def customFieldMap = [:]

//Get all the fields in the system
def fieldIDs = get("/rest/api/3/field")
    .header('Content-Type', 'application/json')
    .header('Accept', 'application/json')
    .asBinary()
def inputStream = fieldIDs.rawBody
def scanner = new java.util.Scanner(inputStream).useDelimiter("\\A")
String rawBody = scanner.hasNext() ? scanner.next() : ""
def json = new JsonSlurper().parseText(rawBody)
//The response is a rawBody, so we need to convert to JSON
json.each{ item ->
    //Iterate through the fields in the system
    fieldNames.each{ fieldName ->
        //Iterate through the user-supplied list of fields
        if (item.name.toString() == fieldName) {
            //If the name of any of the fields matches one of the user-supplied field names
            customFieldMap[fieldName] = item.id.toString()
            //Add that field name and ID to the map
        }
    }
}

while (stillPaginating) {
    //Loop and paginate while this value is true
    def response = get("/rest/api/2/search?jql=&startAt=${startAt}&maxResults=${maxResults}")
        .header('Content-Type', 'application/json')
        .header('Accept', 'application/json')
        .asJson()
    //Fetch the current batch of pagination values
    def issues = response.body.object.issues
    issues.each{ issue ->
        issue.fields.each{ field ->
            customFieldMap.each{ key, value ->
                try{
                    if(field[value] != null){
                        logger.warn("${issue.key} uses custom field ${key}. It has a value of ${field[value]}.")
                    }
                }catch(Exception e){
                    logger.warn(e.toString())
                }
            }
        }
    }
    if (issues.length() < maxResults) {
        stillPaginating = false
        //Kill the pagination loop if we don't get a full batch of responses
    }
    startAt += 50
    //Move to the next batch of pagination values
}
Here’s the truth: getting all of the filters in a Jira DC instance with ScriptRunner is awkward and fussy. There’s no method that simply returns all of the filters.
Instead, we need to first return all of the users in the system. And then we need to examine all of the filters that each of them owns, as each filter must have an owner. Here’s an example of some code that does that:
import com.atlassian.jira.user.ApplicationUser
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.bc.filter.SearchRequestService
import com.atlassian.jira.issue.search.SearchRequest
import com.atlassian.jira.bc.user.search.UserSearchParams
import com.atlassian.jira.bc.user.search.UserSearchService
SearchRequestService searchRequestService = ComponentAccessor.getComponent(SearchRequestService.class)
UserSearchService userSearchService = ComponentAccessor.getComponent(UserSearchService)
def sb = new StringBuffer()
UserSearchParams userSearchParams = new UserSearchParams.Builder()
.allowEmptyQuery(true)
.includeInactive(false)
.ignorePermissionCheck(true)
.build()
//Define the parameters of the query
//Iterate over each user's filters
userSearchService.findUsers("", userSearchParams).each{ApplicationUser filter_owner ->
try {
searchRequestService.getOwnedFilters(filter_owner).each{SearchRequest filter->
String jql = filter.getQuery().toString()
//For each filter, get its JQL and record it
sb.append("Found: ${filter.name}, ${jql}\n" + "<br>")
}
} catch (Exception e) {
//if filter is private
sb.append("Unable to get filters for ${filter_owner.displayName} due to ${e}")
}
}
return sb
Getting a list of filters on Jira Cloud is much simpler, as there’s a REST API that accomplishes this. If we call /rest/api/3/filter/search, a paginated list of filters in the system is returned. As a reminder, I wrote a little pagination primer. The results look like this:
{
"self": "https://<url>.atlassian.net/rest/api/3/filter/search?maxResults=50&startAt=0",
"maxResults": 50,
"startAt": 0,
"total": 2,
"isLast": true,
"values": [{
"expand": "description,owner,jql,viewUrl,searchUrl,favourite,favouritedCount,sharePermissions,editPermissions,isWritable,subscriptions",
"self": "https://<url>.atlassian.net/rest/api/3/filter/10002",
"id": "10002",
"name": "Filter For EP Board"
}, {
"expand": "description,owner,jql,viewUrl,searchUrl,favourite,favouritedCount,sharePermissions,editPermissions,isWritable,subscriptions",
"self": "https://<url>.atlassian.net/rest/api/3/filter/10001",
"id": "10001",
"name": "Filter for EP board"
}]
}
What you do with the results is up to you!
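For instance, a minimal ScriptRunner for Jira Cloud sketch might page through the endpoint and collect the filter names, using the isLast flag from the response to know when to stop:
def startAt = 0
def isLast = false
def filterNames = []
while (!isLast) {
    //Fetch the next page of filters
    def resp = get("/rest/api/3/filter/search?startAt=${startAt}&maxResults=50")
        .header('Accept', 'application/json')
        .asObject(Map)
    resp.body.values.each { filter ->
        filterNames << "${filter.id}: ${filter.name}"
    }
    //The response tells us directly whether this is the last page
    isLast = resp.body.isLast
    startAt += 50
}
return filterNames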
Let’s talk about meta-macros. That is, macros that examine other macros. I just made up the term, so don’t be concerned if you can’t find other examples on the internet.
If you wanted some insight into which pages in your Confluence Instance were using a specific macro, how would you find that information?
You could certainly check each page manually, but that sounds dreadful.
One option to get Macro information is this ScriptRunner script that I wrote, which examines the latest version of each page in each Space for references to the specified macro:
import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.confluence.spaces.SpaceManager
import com.atlassian.confluence.pages.PageManager
import com.atlassian.confluence.pages.Page
def pageManager = ComponentLocator.getComponent(PageManager)
def spaceManager = ComponentLocator.getComponent(SpaceManager)
def spaces = spaceManager.getAllSpaces()
def macroRef = 'ac:name="info"'
spaces.each {
spaceObj ->
def pages = pageManager.getPages(spaceObj, false)
pages.each {
page ->
if (page.getBodyContent().properties.toString().contains(macroRef) && page.version == page.latestVersion.version) {
log.warn("'${page.title}' (version ${page.version}) is the latest version of the page, and contains the target macro")
}
}
}
But what if you wanted MORE information? What if you wanted to know every macro running on every page in the system, and you didn’t have ScriptRunner to do it for you? In that case, we need a macro that catalogues other macros.
Macros are useful because they can be purely JavaScript, and can access the Confluence REST API. Therefore, they can do anything that ScriptRunner could have done with the REST API. It’s just a lot less convenient.
Even in a Confluence instance that doesn’t have ScriptRunner, there is generally the opportunity to add a user macro to the system. You’ll need to be a Confluence Admin to do so, but I assume most people reading my blog will already have that access. I set out to write a Macro that would provide me with information about every macro running on every page in Confluence, and present it in an easily-readable format. Behold, the results:
Each page name is a link to the page, each Space name is a link to the space, and if you click the name of the macro you get a little info box:
Neat!
Here’s the full macro code. I am new to JavaScript, and I tried my best, therefore nobody can criticize me:
## @noparams
<script type="text/javascript">
//Declare global variables
let macroDetails = [];
var pageInfoArr = [];
var tableGenerated = false;
//Start main function loop
AJS.toInit(function() {
//Declare CSS styles to be applied
$('<style>').text(".modal-body { padding: 10px;}"+
"table {border-collapse: collapse; width: 60%; color: #333; font-family: Arial, sans-serif; font-size: 14px; text-align: left; border-radius: 10px; overflow: hidden; box-shadow: 0 0 20px rgba(0, 0, 0, 0.1); margin: auto; margin-top: 50px; margin-bottom: 50px;} " +
"th {background-color: #1E90FF; color: #fff; font-weight: bold; padding: 10px; text-transform: uppercase; letter-spacing: 1px; border-top: 1px solid #fff; border-bottom: 1px solid #ccc;} " +
"td {background-color: #fff; padding: 20px; border-bottom: 1px solid #ccc; font-weight: bold;} " +
"table tr:nth-child(even) td {background-color: #f2f2f2;}" + ".modal {display: none;position: fixed;top: 0;left: 0;right: 0;bottom: 0;background-color: rgba(0, 0, 0, 0.5);z-index: 9999; /* Ensure the dialog is on top of other elements */}"+
".modal-message {text-align: left;font-size: 16px;line-height: 1.6;}"+
".modal-header {padding: 10px;background-color: #1E90FF;color: #fff;font-weight: bold;text-transform: uppercase;letter-spacing: 1px;border-top-left-radius: 5px;border-top-right-radius: 5px;}"+
".modal-content {display: flex;flex-direction: column;justify-content: center;width: 30%;background-color: #fff;border-radius: 5px;box-shadow: 0 2px 4px rgba(0, 0, 0, 0.3);margin: auto;}"
).appendTo(document.head);
//Query Confluence for all content objects that are a page
jQuery.ajax({
url: "/rest/api/content?type=page&start=0&limit=99999",
type: "GET",
contentType: "application/json",
//Upon successfully querying for the pages in Confluence
success: function(data) {
//Iterate through all of the pages
data.results.forEach(function(page) {
//For each page, query the REST API for more details about it
jQuery.ajax({
url: "/rest/api/content/" + page.id + "?expand=body.storage.value,version,space",
type: "GET",
contentType: "application/json",
//If the page details query was successful, parse the page for references to macros
success: function(data) {
var html = data.body.storage.value;
var re = /<ac:structured-macro\s+ac:name="([^"]+)"/g;
//When a match is found, add the macro to the array
var match;
while ((match = re.exec(html)) !== null) {
var name = match[1];
console.log("Found structured-macro with name: " + name);
pageInfoArr.push({
name: data.title,
space: data.space.name,
macro: name
});
}
//Check to ensure that the array has at least one value in it
if (pageInfoArr.length > 0) {
//If the array has a value, check to see if the table has been generated
//If it hasn't, create the table and add the headers
if (!tableGenerated) {
let table = document.createElement("table");
let row = table.insertRow();
let headers = ["Page Name", "Page Space", "Macro Name (click for info)"];
for (let i = 0; i < headers.length; i++) {
let header = document.createElement("th");
let text = document.createTextNode(headers[i]);
header.appendChild(text);
row.appendChild(header);
}
document.getElementById("main-content").appendChild(table);
tableGenerated = true;
}
//If the table HAS been generated, append the information to it
if (tableGenerated) {
let table = document.getElementsByTagName("table")[0];
let row = table.insertRow();
let nameCell = row.insertCell();
let spaceCell = row.insertCell();
let macroCell = row.insertCell();
//Get the name of the macro
var macroName = pageInfoArr[pageInfoArr.length - 1].macro
//Set the ID of the modal info box
let modalID = 'myModal' + pageInfoArr.length; // unique ID for each modal
//Log some information about what's happening
console.log(`modal #: ${modalID} macroname: ${macroName}`);
//Add the page name and space name to the table
nameCell.innerHTML = `<a href="/pages/viewpage.action?pageId=${data.id}" target="_blank">${data.title}</a>`;
spaceCell.innerHTML = `<a href="/display/${data.space.key}" target="_blank">${data.space.name}</a>`;
//Now we're querying the REST API for more information about the macro in question
jQuery.ajax({
url: `/plugins/macrobrowser/browse-macros-details.action?id=${macroName.toString()}`,
type: "GET",
contentType: "application/json",
//If we successfully returned information about the macro, confirm that the information we need is available
success: function(macroData) {
let description = macroData.details ? macroData.details.description : 'No description available';
macroDetails.push({details: description});
//Populate the macro info cell with information
//This includes the information that will be displayed in the modal info box
macroCell.innerHTML = '<span class="word" onclick="showDialog(\'' + modalID + '\')">' + macroName + '</span> <div id="'+modalID+'" class="modal"> <div class="modal-content"><div class="modal-header">Macro Details</div><div class="modal-body"><p class="modal-message">'+
`Macro Name: <i>${macroData.details.macroName}</i><br>
Plugin Key: <i>${macroData.details.pluginKey}</i><br>
Description: <i>${macroData.details.description}</i><br>`
+'</p></div></div></div>';
},//Handle errors for GET macro details request
error: function(xhr, status, error) {
console.log("Macro detail request failed. Error message: " + error);
}
});//End GET macro details request
}//End pageInfoArr.length check
}//End success loop for GET page details request
},//Handle errors for GET page details request
error: function(xhr, status, error) {
console.log("GET request for page object failed. Error message: " + error);
}
});//End for-each page loop
});
},//End success loop of first ajax query, the GET content request
//Handle errors in the first ajax query, the GET content request
error: function(xhr, status, error) {
console.log("Content GET request failed. Error message: " + error);
}
});//End first ajax query, the GET content request
});//End main toInit function loop
//Declare the functions we use to view, hide, and access the modal info boxes
function showDialog(modalID) {
var modal = document.getElementById(modalID);
modal.style.display = 'flex';
}
function hideDialog() {
var modal = document.getElementById('myModal');
modal.style.display = 'none';
}
window.onclick = function(event) {
var modals = document.getElementsByClassName('modal');
Array.from(modals).forEach(function(modal) {
if (event.target === modal) {
modal.style.display = 'none';
}
});
};
</script>
{
"expand": "schema,names",
"startAt": 0,
"maxResults": 50,
"total": 35,
...
}
{
"expand": "schema,names",
"startAt": 0,
"maxResults": 100,
"total": 35,
...
}
def maxResults = 100
def startAt = 0
def stillPaginating = true
//Define the global variables
while (stillPaginating == true){
//We need to paginate through the results
//We're iterating by 100 results at a time
def getIssueKeys = get("/rest/api/3/search?jql=project%20is%20not%20EMPTY&maxResults=${maxResults}&startAt=${startAt}")
.header('Content-Type', 'application/json')
.asObject(Map)
//We start by returning all of the issues in the instance with a JQL search
getIssueKeys.body.issues.each{ issueKey ->
//Do something with each issue
}
if(getIssueKeys.body.issues.size() < maxResults){
stillPaginating = false
}
//Finally, we check to see if we're still paginating
//If the number of results in this batch is less than the maximum page size,
//we must be at the end of the line and can stop paginating by terminating the loop
startAt += 100
}
This script searches for Jira filters by name, then by filter ID, and then updates the share permissions applied to them.
It’s useful for updating a bulk list of filter names, but really it’s an exercise in working with filters and the search service.
The script iterates through an array of filter names. For each name, it uses the searchRequestManager to look up matching filters. This seems redundant, but the objects returned by the name search aren’t actually filters; they’re information about filters, and because more than one filter can share a name, we have to treat the results as an array and iterate through them. For each filter ID returned by the name search, we use the searchRequestManager to fetch the actual filter object. Once we’ve got a filter object, we apply the new permissions to it and commit the change with updateFilter().
import com.atlassian.jira.bc.JiraServiceContextImpl
import com.atlassian.jira.bc.filter.SearchRequestService
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.sharing.SharePermissionImpl
import com.atlassian.jira.sharing.SharedEntity
import com.atlassian.jira.sharing.type.ShareType
import com.atlassian.jira.issue.search.*
def searchRequestService = ComponentAccessor.getComponent(SearchRequestService)
def currentUser = ComponentAccessor.jiraAuthenticationContext.loggedInUser
def searchRequestManager = ComponentAccessor.getComponent(SearchRequestManager)
final authenticatedUserSharePerms = new SharedEntity.SharePermissions([new SharePermissionImpl(null, ShareType.Name.GROUP, "jira-users", null)] as Set)
//Create the permissions that will be applied to the filter
//SharedEntity is a class for any entity that can be shared or favorited
//https://docs.atlassian.com/software/jira/docs/api/7.2.0/com/atlassian/jira/sharing/SharedEntity.html
//We're definine a new set of share permissions:
//https://docs.atlassian.com/software/jira/docs/api/7.6.1/com/atlassian/jira/sharing/SharePermissionImpl.html
def filters = ["<filter name>"]
//Add the names of the filters to be updated
filters.each{filterName ->
def filterIDs = searchRequestManager.findByNameIgnoreCase(filterName)
//Find the filters using their names
//Sometimes more than one filter has the same name, so we treat the results like an array:
filterIDs.each{filterID ->
def filter = searchRequestManager.getSearchRequestById(filterID.id)
//For each result returned when we search for the filter name
filter.setPermissions(authenticatedUserSharePerms)
//Set the updated permissions on the filter
def filterUpdateContext = new JiraServiceContextImpl(currentUser)
//Define the context in which this update will be performed
//If we try to run the update as the filter.owner, it doesn't work with filters that are exposed to anyone on the web
searchRequestService.updateFilter(filterUpdateContext, filter)
//Actually perform the update, using the context and running against the filter
log.warn("Updating permissions for filter: ${filter.name}")
if (filterUpdateContext.errorCollection.hasAnyErrors()) {
log.warn("Error updating filter - possibly owner has been deleted. Just delete the filter. " + filterUpdateContext.errorCollection)
}
}
}
Well, it finally happened. I finally had to start learning JavaScript.
It’s actually not that bad; I probably should have learned it a while ago. My use case for it is writing Confluence Macros and plugins for both Confluence and Jira. I started with the plugins, for simplicity’s sake. My inspiration came from a post on the Atlassian Community Forums. Someone had requested a way to essentially mirror the macro setup of the most recently updated child page of a parent page. I think that without pretty strong knowledge of Confluence and the REST API, I’d have struggled to complete this; it was enough work to learn JavaScript’s basic tenets as I went.
Okay so what do we actually need the script to do? We need it to:
1. Find the most recently updated child page of the current (parent) page
2. Grab the settings of the target macro from that child page
3. Update the macro on the parent page to mirror those settings
These are the three high-level functions that the macro needs to accomplish.
Figuring out the most recently updated child page wasn’t hard. You can make a call to baseURL + pageID + “/child/page?limit=1000&expand=history.lastUpdated”. This returns a list of the child pages for the given parent, along with their last-updated dates. The base URL is easy, since it’s simply the instance that we’re on. The pageID is the page ID of the parent page, which we can get by calling AJS.params.pageId.
So we have a URL that’ll give us a list of pages. If we sort that list by date and return the most recent one, we have the page ID of the most recently updated child page.
We can then use that ID to fetch the child page itself, by calling /pages/viewpage.action?pageId=${childPageID}. Here’s what the function looks like:
const baseURL = "/rest/api/content/";
const childrenURL = baseURL + pageID + "/child/page?limit=1000&expand=history.lastUpdated";
//Get the API endpoint with which we will fetch the most recently updated child pages
fetch(childrenURL)
.then(response => response.json())
.then(data => {
const sortedChildren = data.results.sort((a, b) => {
const aDate = new Date(a.history.lastUpdated.when);
const bDate = new Date(b.history.lastUpdated.when);
return bDate - aDate;
});
//Return and sort the most recently updated child pages
const mostRecentChildID = sortedChildren[0].id;
console.log("The ID of the most recently updated child page is: " + mostRecentChildID);
}).catch(error => console.log("Error fetching child page ID:", error));
The variable we end up with, mostRecentChildID, is the value we need to work with next. But there’s a problem: fetch doesn’t return a value. It returns a promise. So let’s explore that a little bit.
Briefly, a promise is the result of an asynchronous JavaScript function. In other words, it’s basically a placeholder. The function will run in the background when it’s called, and the rest of the script will continue processing. When you’re ready, you can call upon the results of that promise. So what’s the problem?
The problem is that we can’t simply refer to the variable that we declared within the fetch function. It’s not available, because the promise hasn’t been fulfilled yet. If we use it within the fetch chain, the code knows to wait for the promise to be resolved or rejected.
So that’s option 1. We can just do everything inside the fetch function, treating it like a giant closure. Option 2 is to use a callback function, wherein we pass the value of the resolved promise to another function. In this way the fetch still knows that we want to do something with the results, but the value is made available outside of the context of the promise. Example:
<script type="text/javascript">
//let newVar;
const p = fetch('/rest/api/content')
.then(response => {
if (!response.ok) {
throw new Error('Failed to fetch data');
}
return response.json();
})
.then(data => {
newVar = data;
myFunction(newVar); // Call the function that needs to access the value of newVar
})
.catch(error => {
console.error("Encountered an error fetching data:", error);
});
function myFunction(data) {
console.log("This is the value of newVar: ", data);
}
</script>
This successfully passes the value of newVar outside of the fetch to another function. Worth noting is that I left a “let” statement commented out at the top so that we could touch on that possibility: simply declaring the variable outside of the promise does NOT fix the issue. We must either pass it to a function or use it within the confines of the promise.
With all that in mind, the actual macro isn’t terribly complicated. Let’s look at the whole thing.
So here’s the macro. It has three nested fetch statements.
The first fetch statement grabs the ID of the most recently updated child page.
The second-level fetch statement uses that ID to grab the settings of the first excerpt-extract macro on the child page.
The third-level fetch statement uses those macro settings to update the parent page.
The result is a macro that mirrors, on a parent page, the macro setup of the most recently updated child page. Neat!
## @noparams
<script type="text/javascript">
let pageID = AJS.params.pageId
//Get the ID of the current (parent) page
const baseURL = "/rest/api/content/";
const childrenURL = baseURL + pageID + "/child/page?limit=1000&expand=history.lastUpdated";
//Get the API endpoint with which we will fetch the most recently updated child pages
fetch(childrenURL)
.then(response => response.json())
.then(data => {
const sortedChildren = data.results.sort((a, b) => {
const aDate = new Date(a.history.lastUpdated.when);
const bDate = new Date(b.history.lastUpdated.when);
return bDate - aDate;
});
//Return and sort the most recently updated child pages
const mostRecentChildID = sortedChildren[0].id;
console.log("The ID of the most recently updated child page is: " + mostRecentChildID);
//Turn the ID of the most recently updated child page into a variable
//Start second-level loop
const url = `/pages/viewpage.action?pageId=` + mostRecentChildID;
//Define the URL of the target child page
fetch(url)
.then(response => response.text())
.then(html => {
const div = document.createElement('div');
div.innerHTML = html;
const macroElements = Array.from(div.querySelectorAll(".conf-macro"));
const matchtestMacros = macroElements.filter(macro => macro.getAttribute("data-macro-name") === "excerpt-include");
const macroData = [];
matchtestMacros.forEach(macro => {
const divs = macro.querySelectorAll("div");
const res = divs[0].innerHTML.replace(/<\/?b>/g, "");
macroData.push(res);
});
const ChildMacroSource = macroData[0];
//Get the first excerpt-include macro from the page
//We're assuming that we're only interested in the first result
//Start third-level loop
const pageURL = baseURL + pageID + '?expand=body.storage,version';
// Retrieve the page content
fetch(pageURL)
.then(response => response.json())
.then(data => {
const pageBody = data.body.storage.value;
// Replace "SourcePage" with "NewPage" in the page body
const modifiedPageBody = pageBody.replace(/ri:content-title="([^"]*)"/g, `ri:content-title="${ChildMacroSource}"`);
//Replace the excerpt-include source with the source from the child page
// Update the page with the modified content
const updateURL = baseURL + pageID;
const bodyData = JSON.stringify({
"id": pageID,
"type": "page",
"title": data.title,
"version": {
"number": data.version.number + 1,
"minorEdit": false
},
"body": {
"storage": {
"value": modifiedPageBody,
"representation": "storage"
}
}
});
fetch(updateURL, {
method: 'PUT',
headers: {
'Content-Type': 'application/json'
},
body: bodyData
})
.then(response => {
console.log(response);
if (!response.ok) {
throw new Error('Failed to update page content: ' + response);
}
alert('Page content updated successfully');
})
.catch(error => {
alert('Error: ' + error.message);
});
})
.catch(error => alert("Encountered an error updating the page content: " + error));
//End third-level loop
}).catch(error => console.error("Encountered an error getting the excerpt-include source from the child page: " + error));
//Second level "then" loop end
}).catch(error => console.log("Error fetching child page ID:", error));
//First level "then" loop end
</script>
As is often the case, the point of this blog isn’t so much to explain how to do something complicated. The point is that I’m trying to explain something simple that should be easy to find an answer for, but was not. In this case my question was “what on earth is a UserTemplate (User Template) in Confluence”?
On the surface, it seems like creating a new user in Confluence should be a pretty straightforward process. There’s a UserAccessor class, and that class has a createUser() method. However, the expected inputs to that method are a User Template object and a Credentials object. From the class documentation:
ConfluenceUser createUser(com.atlassian.user.User userTemplate, com.atlassian.user.security.password.Credential password)
The import required to work with Credential is spelled out for us, but the userTemplate is a different story. There’s virtually no documentation on what that means, and no amount of Googling “Confluence User Template”, “UserTemplate”, “Confluence Create User Template” will actually tell you what to do. Part of the issue is that “template” means several different things in the context of Confluence, so that muddies the waters.
Let’s cut to the chase. Here’s the code that I eventually came up with:
import com.atlassian.confluence.api.model.people.User
import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.confluence.user.UserAccessor
import com.atlassian.user.impl.DefaultUser
import com.atlassian.user.security.password.Credential
def userAccessor = ComponentLocator.getComponent(UserAccessor)
def user = userAccessor.createUser(new DefaultUser("kenm", "Ken McClean", "ken@email.com"), Credential.unencrypted("password"));
return user
The UserTemplate that we’re working with is the DefaultUser object. We import it using the com.atlassian.user.impl package, of which DefaultUser is the only class. So it’s not like the package is going to tell us how to work with other user templates.
When we call the createUser() method, we define the UserTemplate at the same time that we declare it. So we start with a base of the DefaultUser, and supply the necessary values to create a user.
You could also create a user with UserService.CreateUserRequest, but then you’ll need to figure out the validation. Not difficult, just a different approach.
In a bid to contribute more to the Atlassian Community, I took a look at the most recent requests and questions on the Forums. One that caught my eye was a request for a Confluence Macro that would:
“…display on the restricted page exactly who has access (including a breakout of all group members, not just the group name) to create transparency and build confidence in the selected user group that they are posting in the appropriately restricted area.”
I’d never created a Confluence Macro before, and this seemed like a challenge I could meet.
Please note that this isn’t a how-to on creating Macros, but really just an accounting of my experience learning the tool.
The first thing I did was check to see what Atlassian has to say on the subject. Confluence Macros are written in Apache Velocity, which is quite different from the Groovy that I’m used to working with.
All of the functional lines in Velocity start with a #, which makes a Velocity script look like one big page of commented-out code. The truth is that Velocity is very old and pretty clunky. The last news update to the project was over two years ago. All but one of the links to development tools in the “Dev Tools” section of the Velocity website are dead links. Is it a dead platform? Maybe! In the meantime, it’s what we have to work with.
The most useful resource I found was this document on available Confluence objects in Velocity. The actual syntax was pretty basic: functional lines start with #, and variables are preceded by a dollar $ign.
A basic example of a Macro would be something like this:
## @noparams
#set($permissions = $content.permissions)
#foreach($permission in $permissions)
$permission<br>
#end
It feels like a mix between writing in BASIC and PowerShell. As you can see above, we set a variable to the permissions on the page, and then loop through them to write out each permission. $content is the object that stores the attributes of the current page. If I wanted to get the ID of the current page, I’d reference $content.id and so on.
$content is an implementation of the ContentEntityObject class in Confluence. The documentation for this class is here. Were I to look into making further use of this class, I’d start by looking at the methods outlined in the documentation.
Okay so here’s the actual Macro that I came up with. It prints the names of individual users with access to the page, as well as the groups with access to the page and the members of those groups.
## @noparams
#set($permissions = $content.permissions)
#set($userNamesArray = [])
#set($groupNamesArray = [])
#foreach ($permission in $permissions)
#if($permission.userSubject && $permission.userSubject.name && !$userNamesArray.contains($permission.userSubject.name))
#set($unused = $userNamesArray.add($permission.userSubject.name))
#end
#if(!$permission.userSubject.name)
#if(!$groupNamesArray.contains($permission.groupName))
#set($unused = $groupNamesArray.add($permission.groupName))
#end
#end
#end
<h3>Users With Page Access</h3><br>
#foreach ($userName in $userNamesArray)
$userName<br>
#end
#foreach ($groupName in $groupNamesArray)
<h3>Groups With Page Access:<br>
<b> $groupName</b></h3><br>
#set($group = $userAccessor.getGroup($groupName))
#set($members = $userAccessor.getMemberNames($group))
Group Members:<br>
#foreach($member in $members)
$member <br>
#end
#end
Most of this is pretty straightforward. You may be wondering why we added values to the two arrays and assigned the result to $unused at the same time. This is because .add() returns a boolean, and we need to pipe that return value into an unused variable. If we don’t, Velocity prints the result of the boolean check to the page. That is, without this extra step, it prints “true true true true” to the macro body on the Confluence page.
There are a number of references to “MacroManager” in the Confluence API documentation, but none of the implementations seemed to work for me.
For that reason, our best bet for checking on Macro usage is to examine the body content of each page, and look for a specific reference to the macro in question.
We also need to check that the page in question is the latest version of the page. Otherwise the script checks all versions of all pages.
import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.confluence.spaces.SpaceManager
import com.atlassian.confluence.pages.PageManager
import com.atlassian.confluence.pages.Page
def pageManager = ComponentLocator.getComponent(PageManager)
def spaceManager = ComponentLocator.getComponent(SpaceManager)
def spaces = spaceManager.getAllSpaces()
def macroRef = 'ac:name="info"'
//Replace "info" with the name of the macro you want to assess
spaces.each{ spaceObj ->
//Get all Spaces in the instance
def pages = pageManager.getPages(spaceObj, false)
pages.each{ page ->
//Get all pages in the instances
if (page.getBodyContent().properties.toString().contains(macroRef) && page.version == page.latestVersion.version) {
//Check if the page contains the macro, then check to see if it's the most current version of the page
log.warn("'${page.title}' (version ${page.version}) is the latest version of the page, and contains the target macro")
}
}
}
//Example Listener for Jira Cloud - Create Bug and Epic Automatically
//Author: Ken McClean / kmcclean@daptavist.com
//Script functions:
// - Script runs as a listener
// - On issue create, script creates a new epic
// - It also creates a new bug
// - The script then sets the epic link of the original issue and the new bug to be the newly created epic
//References:
//https://developer.atlassian.com/cloud/jira/platform/rest/v3/api-group-issues/#api-rest-api-3-issue-post
//https://library.adaptavist.com/entity/create-subtasks-when-issue-created
//BEGIN DECLARATIONS:
final bugTypeID = "10008"
final epicTypeID = "10000"
final epicNameField = "customfield_10011"
final epicLinkField = "customfield_10014"
//Set the constants that we'll use to refer to components within the system
//You'd have to find these values within your own Jira instance, and update the script accordingly
def issueKey = issue.key
//def issueKey = "ABC-8"
//Get the key of the newly created issue
//If we wanted to test against a specific issue, we'd convert to using something like "def issueKey = "ABC-8"
//*************************
//BEGIN MAIN SCRIPT FUNCTIONS:
//Fetch the new issue as an object
def issueResp = get("/rest/api/2/issue/${issueKey}").asObject(Map)
assert issueResp.status == 200
def issue = issueResp.body as Map
def fields = issue.fields as Map
//Get the issue fields
//Create a new epic under the same project as the new issue
logger.warn("Creating a new epic. Result:")
def createEpic = post('/rest/api/2/issue')
.header('Content-Type', 'application/json')
.body(
fields: [
"${epicNameField}": "Epic For ${issueKey}",
project: [id: fields.project.id],
issuetype: [
id: "${epicTypeID}"
],
summary: "Epic - Related to ${issueKey}"
])
.asObject(Map)
//Get and validate the newly created epic
def newEpicIssue = createEpic.body
//If the epic was created successfully, return a success message
if (createEpic.status >= 200 && createEpic.status < 300 && newEpicIssue && newEpicIssue.key != null) {
logger.info("Success - Epic Created with the key of ${createEpic.body.key}")
} else {
logger.error("${createEpic.status}: ${createEpic.body}")
}
logger.warn("Creating a new bug. Result:")
//Create a new bug, and set the epic link of the bug to the previously created epic
def createBug = post('/rest/api/2/issue')
.header('Content-Type', 'application/json')
.body(
fields: [
"${epicLinkField}": "${createEpic.body.key}",
project: [id: fields.project.id],
issuetype: [
id: bugTypeID
],
summary: "Bug - Related to ${issueKey}"
])
.asObject(Map)
//Get and validate the newly created bug
def newBugIssue = createBug.body
//If the bug was created successfully, return a success message
if (createBug.status >= 200 && createBug.status < 300 && newBugIssue && newBugIssue.key != null) {
logger.info("Success - Bug Task Created with the key of ${createBug.body.key}")
} else {
logger.error("${createBug.status}: ${createBug.body}")
}
//Set the epic link of the new issue to be the key of the newly created epic
logger.warn("Setting the epic link of the original issue to be the key of the new epic. Result:")
def setEpicLink = put("/rest/api/2/issue/${issueKey}")
.header('Content-Type', 'application/json')
.body(
fields: [
"${epicLinkField}": "${createEpic.body.key}",
])
.asObject(Map)
//If the epic link was set successfully, return a success message
//Note: a successful PUT to the issue endpoint returns a 204 with no response body, so we only check the status
if (setEpicLink.status >= 200 && setEpicLink.status < 300) {
logger.info("Success - Epic Link created for ${issueKey}")
} else {
logger.error("${setEpicLink.status}: ${setEpicLink.body}")
}
One of the challenges that Jira admins face is monitoring the health of their Jira instance. While there are some built-in tools for doing this, it’s useful to know how to perform routine maintenance using ScriptRunner. One such routine maintenance task is the monitoring of Jira Project sizes. There’s no direct method or way of doing this; instead we get all of the issues for a given project, and sum up the size of the attachments on each issue. In this way, we get an idea of which Jira Projects are becoming unwieldy.
The code required to perform this calculation isn’t complicated. However, if the script runs too long it’ll simply time out. This is especially true if you’re calculating the size of multiple Projects. Here’s the code to calculate the size of a single project:
import org.ofbiz.core.entity.GenericValue;
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.Issue;
import com.atlassian.jira.issue.IssueManager;
import com.atlassian.jira.project.Project;
import com.atlassian.jira.project.ProjectManager
def totalSize = 0
def attachmentManager = ComponentAccessor.getAttachmentManager()
ProjectManager projectManager = ComponentAccessor.getProjectManager()
def projectName = "<projectName>"
Project proj = projectManager.getProjectByCurrentKey(projectName)
IssueManager issueManager = ComponentAccessor.getIssueManager()
for (GenericValue issueValue: issueManager.getProjectIssues(proj.genericValue)) {
Issue issue = issueManager.getIssueObject(issueValue.id)
attachmentManager.getAttachments(issue).each {
attachment ->
totalSize += attachment.filesize
}
}
log.warn("Total size of attachments for ${proj.name} is ${totalSize / 1024} kb")
This code works, but makes no provisions for especially large projects that would cause the script to time out. This would easily be exacerbated if more than one project were being analyzed. So how do we solve this?
As we’ve previously discussed, Jira has a number of methods that allow for asynchronous operations. The code below performs the same basic operation as the previous script, but it submits each operation to the executor. This allows for multiple projects to be analyzed simultaneously. Additionally, by changing the value of the awaitTermination method, we can actually change how long the script will run before it times out.
import org.ofbiz.core.entity.GenericValue;
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.Issue;
import com.atlassian.jira.issue.IssueManager;
import com.atlassian.jira.project.Project;
import com.atlassian.jira.project.ProjectManager
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
def attachmentManager = ComponentAccessor.getAttachmentManager()
ProjectManager projectManager = ComponentAccessor.getProjectManager()
def projects = [<projects>]
log.warn("There should be ${projects.size()} results when this finishes.")
//Log the number of projects in the list, so you can easily tell if it finished or simply timed out
// Create a fixed thread pool with a number of threads equal to the size of the list of projects
ExecutorService executor = Executors.newFixedThreadPool(projects.size());
// Create a list of tasks to execute
List < Runnable > tasks = []
projects.each { project ->
//Iterate through the list of projects
Project proj = projectManager.getProjectByCurrentKey(project)
IssueManager issueManager = ComponentAccessor.getIssueManager()
tasks.add({
def projectTotal = 0
//Use a task-local total; a single shared variable would mix projects together and be subject to race conditions
for (GenericValue issueValue: issueManager.getProjectIssues(proj.genericValue)) {
Issue issue = issueManager.getIssueObject(issueValue.id)
attachmentManager.getAttachments(issue).each {
attachment ->
projectTotal += attachment.filesize
}
}
log.warn("Total size of attachments for ${proj.name} is ${projectTotal / 1024} kb")
})
}
// Submit the list of Tasks to the executor
tasks.each {
task ->
executor.submit(task)
}
//Stop the Executor, and wait for all of the running tasks to finish
executor.shutdown()
executor.awaitTermination(10, TimeUnit.MINUTES)
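One refinement worth considering: awaitTermination() returns a boolean telling you whether everything finished inside the window, so the script can report a timeout instead of failing silently. A small sketch of what those last two lines could become:
executor.shutdown()
//awaitTermination returns false if the timeout elapsed before all tasks completed
if (!executor.awaitTermination(10, TimeUnit.MINUTES)) {
    def neverStarted = executor.shutdownNow()
    log.warn("Timed out waiting for the project calculations; ${neverStarted.size()} tasks never started")
}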
Well, you could certainly do it by hand. That’s an option. Or you could write a little script and use an API endpoint that I’ve only just discovered.
The script below fetches all of the projects in the system that use a notification scheme. We then filter out only the ones we want to adjust, and update each of those projects to use the target notification scheme.
This script is a great example of the kind of thing that ScriptRunner excels at, and it’s a great script to use to start learning the REST API.
import groovy.json.JsonSlurper
// Notification scheme ID to search for
def searchSchemeId = "10000"
// Notification scheme ID to use for update
def updateSchemeId = 10002
def response = get("/rest/api/3/notificationscheme/project")
.header('Content-Type', 'application/json')
.asJson()
// Parse the JSON response
def jsonSlurper = new JsonSlurper()
def json = jsonSlurper.parseText(response.getBody().toString())
// Access the nodes in the JSON
json.values.each {
project ->
logger.warn(project.toString())
//We need to account for -1, which is always a value that the system stores as a project ID for some reason
if ((project.notificationSchemeId == searchSchemeId) && (project.projectId != "-1")) {
def update = put("/rest/api/2/project/" + project.projectId.toString())
.header('Content-Type', 'application/json')
.body(["notificationScheme": updateSchemeId])
.asJson()
if (update.status != 200) {
logger.warn("ERROR: " + update.body)
} else {
logger.warn("SUCCESS: updated project with ID " + project.projectId.toString())
}
}
}
I admit, this one is a very specific use case. But I had a reason to create the script, so maybe someone will find it useful. I also got to use the .collect method in a useful way!
This script identifies all of the pages that are linked from a target page. It then compares that list of links to a list of all the pages in the Space.
By doing this, it identifies any pages in the Space that aren’t linked on the target page. This could be useful if you had a wiki or something, and wanted to know which pages weren’t linked on the front page.
One interesting thing I discovered while doing this is the outgoingLinks method of the page class. Instead of having to use regex to find the URLs on a page, I simply had to call this method and all of the URLs were returned for me.
import com.atlassian.confluence.pages.PageManager
import com.atlassian.confluence.spaces.SpaceManager
import com.atlassian.sal.api.component.ComponentLocator
def pageManager = ComponentLocator.getComponent(PageManager)
def spaceManager = ComponentLocator.getComponent(SpaceManager)
def targetSpace = spaceManager.getSpace("<Space Key>")
def spacePages = pageManager.getPages(targetSpace, true)
def targetPage = pageManager.getPage(<page ID as Long>)
def outgoingLinks = targetPage.outgoingLinks.collect { link ->
link.toString().minus("ds:")
}
spacePages.each {
page ->
if (!outgoingLinks.contains(page.title)) {
log.warn("${page.title} is not linked to the front page")
}
}
One of the problems we encounter with migrating large Jira instances is that when it comes to JCMA, you either have to add all of the Advanced Roadmaps plans or none of them. There’s no facility for selectively adding Roadmaps plans.
One such migration involved moving a subset of the instance data, rather than the whole thing. Though the instance had 1400+ Roadmaps plans, we only needed to migrate about 100 of them.
The solution we came up with was to keep a list of the plans we actually needed, and delete every other plan before running JCMA.
Naturally, none of us fancied deleting 1300 Roadmaps plans by hand, especially if we had to do it more than once during the course of testing. So we scripted it.
The actual deleting of an Advanced Roadmaps plan is pretty simple. To delete a plan, just send a DELETE request to the plan’s API endpoint, /rest/jpo/1.0/plans/<planID>. So long as you have a list of plan names or IDs that you want to keep, you can tell the script to delete any plans that don’t match an entry in that list.
The script below does a few things: it fetches a list of every Roadmaps plan in the system, works out which plans aren’t on the list of plans to keep, and sends a DELETE request for each of those unwanted plans.
import com.atlassian.sal.api.net.Request
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.config.properties.APKeys
import com.atlassian.sal.api.net.Response
import com.atlassian.sal.api.net.ResponseException
import com.atlassian.sal.api.net.ReturningResponseHandler
import com.atlassian.sal.api.net.TrustedRequest
import com.atlassian.sal.api.net.TrustedRequestFactory
import groovy.json.JsonBuilder
import groovy.json.JsonSlurper
import groovyx.net.http.ContentType
import groovyx.net.http.URIBuilder
import java.net.URL;
def toKeep = ["1111", "22222", "33333"]
// Define the array of plan names that should be kept
def currentUser = ComponentAccessor.jiraAuthenticationContext.loggedInUser
def baseUrl = ComponentAccessor.applicationProperties.getString(APKeys.JIRA_BASEURL)
def trustedRequestFactory = ComponentAccessor.getOSGiComponentInstanceOfType(TrustedRequestFactory)
def endPointPath = '/rest/jpo/1.0/programs/list'
def url = baseUrl + endPointPath
// Reusable method for sending HTTP requests and parsing the JSON response
def sendRequest(url, method, headers=[:], body=null) {
def currentUser = ComponentAccessor.jiraAuthenticationContext.loggedInUser
def baseUrl = ComponentAccessor.applicationProperties.getString(APKeys.JIRA_BASEURL)
def trustedRequestFactory = ComponentAccessor.getOSGiComponentInstanceOfType(TrustedRequestFactory)
def request = trustedRequestFactory.createTrustedRequest(method, url) as TrustedRequest
headers.each { k, v -> request.addHeader(k, v) }
request.addTrustedTokenAuthentication(new URIBuilder(baseUrl).host, currentUser.name)
request.addHeader("X-Atlassian-Token", 'no-check')
if (body != null) {
request.setEntity(new JsonBuilder(body).toString())
}
try {
def response = request.executeAndReturn(new ReturningResponseHandler<Response, Object>() {
Object handle(Response response) throws ResponseException {
if (response.statusCode != HttpURLConnection.HTTP_OK) {
log.error "Received an error while posting to the rest api. StatusCode=$response.statusCode. Response Body: $response.responseBodyAsString"
return null
} else {
return new JsonSlurper().parseText(response.responseBodyAsString)
}
}
})
return response
} catch (Exception e) {
log.warn(e)
return null
}
}
// Retrieve a list of all of the Roadmaps Plans in the system
def response = sendRequest(url, Request.MethodType.GET, ["Content-Type": ContentType.JSON.toString()])
def jsonResp = response ?: [:]
// Filter out the plans to be deleted
def plansToDelete = jsonResp.plans?.findAll {
plan -> !toKeep.contains(plan.title.toString())
} ?: []
//Use safe navigation, so a failed request (and empty response) doesn't throw a null pointer exception
// Delete the unwanted plans
plansToDelete.each {
plan ->
endPointPath = '/rest/jpo/1.0/plans/' + plan.id.toString()
url = baseUrl + endPointPath
sendRequest(url, Request.MethodType.DELETE, ["Content-Type": ContentType.JSON.toString()])
}
The settings or preferences for a given user in Jira Cloud are stored in a number of locations within the system. The User Properties section contains settings relating to which interface elements the user sees or doesn’t see.
For example, when you first access ScriptRunner on a Jira instance, you’re presented with a little quiz.
After you click through this quiz it goes away forever. Someone recently remarked that they’d love to not have their ScriptRunner users be presented with this quiz in the first place.
…Okay, we can make that happen!
First we need to query the user properties for a given user with this code:
def userProps = get("rest/api/2/user/properties")
.header("Accept", "application/json")
.queryString("accountId", "<user account ID>")
.asJson();
return userProps.body
The results look something like this:
{
"array": {
"empty": false
},
"object": {
"keys": [
{
"self": " https://some-jira-instance.atlassian.net/rest/api/2/user/properties/navigation_next_ui_state?accountId=12345678910abcdef123456",
"key": "navigation_next_ui_state"
},
{
"self": " https://some-jira-instance.atlassian.net/rest/api/2/user/properties/onboarding?accountId=12345678910abcdef123456",
"key": "onboarding"
},
{
"self": " https://some-jira-instance.atlassian.net/rest/api/2/user/properties/project.config.dw.create.last.template?accountId=12345678910abcdef123456",
"key": "project.config.dw.create.last.template"
},
{
"self": " https://some-jira-instance.atlassian.net/rest/api/2/user/properties/sr-post-install-quiz?accountId=12345678910abcdef123456",
"key": "sr-post-install-quiz"
}
]
}
}
The key we’re interested in is the one at the bottom, called sr-post-install-quiz. A new user isn’t going to have this property; it only gets added to the list of user properties after a user has completed the quiz. If we were to somehow add this to the list of user properties for a new user, they’d never see the little quiz in the first place. As far as the system is concerned, they already clicked through the quiz.
We want this change to take effect every time a new user is created, so that the new user never sees the quiz. Therefore, we need to set this up as a listener that runs on user creation. Here’s the surprisingly simple full code for the listener:
def userId = event.user.accountId.toString()
//Get the userID of the new user from the event
def userProps = put('rest/api/2/user/properties/sr-post-install-quiz?accountId='+userId)
//Feed the userId variable into the PUT request
.header('Accept', 'application/json')
.header('Content-Type', 'application/json')
.body(["value":"your value"])
//The body values don't have to change, the API simply needs something in the body of the request
.asJson()
Worth noting is that the JSON in the body of the request has to be present, even though it has nothing to do with the end result. For whatever reason, this API call demands that some kind of JSON be included.
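Incidentally, if you ever need to undo this for testing, deleting the property should bring the quiz back for that user. A quick sketch, assuming the standard delete-user-property endpoint:
def userId = "<user account ID>"
def resp = delete("rest/api/2/user/properties/sr-post-install-quiz?accountId=${userId}")
    .asString()
//A 204 response means the property was removed, and the quiz will reappear
logger.info("Delete returned status ${resp.status}")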
Much like yesterday’s post about injections, it took me a little bit to figure out what was going on with pipelines and the left shift << operator in Groovy.
In the end, it was simplest for me to say “X << Y basically means feed the data in Y into the function of X”. In this analogy, the pipeline is the closure function that X represents.
Let’s look at a Jira-related example:
import com.atlassian.jira.component.ComponentAccessor
// Get the Jira issue manager
def issueManager = ComponentAccessor.getIssueManager()
// Get all the issues in a target project
def issues = issueManager.getIssueObjects(issueManager.getIssueIdsForProject(10100))
log.warn("This is the full list of issues: " + issues)
//Assume one of those issues, ZT-4, has a priority of "lowest"
//Define a new pipeline into which we'll feed the list of issue objects
def pipeline = {
it.findAll { !(it.priority.name == "Lowest") }
}
// Apply the pipeline to the list of issues
def filteredAndSortedIssues = pipeline << issues
return filteredAndSortedIssues
So what are we doing? First we’re grabbing a list of issues from a target project. Then we’re defining a pipeline, which is simply a closure that performs an operation on an as-yet undefined set of data.
Finally, we’re invoking the pipeline and feeding it the list of issues using that left shift << operator.
The result should be a list of all of the issues that do not have a priority of “Lowest”.
It’s easy to see how this could reduce code duplication. You’ve got a declared function, the pipeline, and you could feed it different arrays of issues.
Of course, there are depths to this functionality that I’ve not yet begun to explore. But as far as a basic idea of what a pipeline and a left shift operator are up to, this is the idea.
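For what it’s worth, Groovy will also let you compose two closures with << before feeding them any data, which is where the pipeline idea starts to pay off. A small sketch reusing the issues list from above (filterLowest and sortByKey are names I made up):
def filterLowest = { list -> list.findAll { it.priority.name != "Lowest" } }
def sortByKey = { list -> list.sort { it.key } }
//Composing with << builds a new closure: sortByKey(filterLowest(issues))
def pipeline = sortByKey << filterLowest
def filteredAndSorted = pipeline(issues)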
When I sat down to write about injections, I thought it’d be a quick little blog post. However, it took me a lot longer than I expected to get my head around even the basic concept of what the injection was actually doing. I get the general idea now, but I don’t see myself putting them into action in my code very often.
This is the best example I could find of an injection in action.
Here’s a quick exploration of some injection code, as I understood it:
(1..5).inject(1) { runningTotal, itemInRange ->
log.warn("$runningTotal * $itemInRange = ${runningTotal * itemInRange}")
log.warn("The running total is $runningTotal and the current item from the range is $itemInRange")
runningTotal * itemInRange
}
The results looked like this:
2023-02-22T03:55:55,063 WARN [runner.ScriptBindingsManager]: 1 * 1 = 1
2023-02-22T03:55:55,063 WARN [runner.ScriptBindingsManager]: The running total is 1 and the current item from the range is 1
2023-02-22T03:55:55,063 WARN [runner.ScriptBindingsManager]: 1 * 2 = 2
2023-02-22T03:55:55,063 WARN [runner.ScriptBindingsManager]: The running total is 1 and the current item from the range is 2
2023-02-22T03:55:55,063 WARN [runner.ScriptBindingsManager]: 2 * 3 = 6
2023-02-22T03:55:55,063 WARN [runner.ScriptBindingsManager]: The running total is 2 and the current item from the range is 3
2023-02-22T03:55:55,063 WARN [runner.ScriptBindingsManager]: 6 * 4 = 24
2023-02-22T03:55:55,063 WARN [runner.ScriptBindingsManager]: The running total is 6 and the current item from the range is 4
2023-02-22T03:55:55,063 WARN [runner.ScriptBindingsManager]: 24 * 5 = 120
2023-02-22T03:55:55,063 WARN [runner.ScriptBindingsManager]: The running total is 24 and the current item from the range is 5
The first result is simply the injected value 1 times the first value in the range, which is also one. The next result is the current value of the running total, 1, times the current value in the range, 2. It iterates through the values in the range like so, keeping a running tally of the results. I suppose if you wanted to sum up the total of a list of integers, you could find a way to make this useful. Me, I’ll stick to my trusty += operations.
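For the record, here’s what that sum would look like with inject; whether it beats += is a matter of taste:
def numbers = [5, 10, 15]
//Start the running total at 0, and add each item to it in turn
def total = numbers.inject(0) { runningTotal, item -> runningTotal + item }
log.warn("The sum is ${total}") //30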
Threading is a fantastic and (relatively) simple way to run parallel HTTP requests against a Jira or Confluence instance. Multithreading can drastically cut down the length of time it takes for a piece of code to run, as each HTTP request is not waiting for the previous request to finish. In Groovy we can achieve threading by using the Thread class or the ExecutorService framework.
However, threading isn’t a viable solution when it comes to Jira (or Confluence) on the Cloud. Because Jira Cloud is a shared hosting environment, there are a number of reasons why Atlassian has put strict limitations on the use of threading. Chief among these concerns are performance and security. If anyone can run any number of threads that they want, this necessarily impacts the other users of the shared hosting environment.
Similarly, multithreading in a shared hosting environment can cause data inconsistencies and synchronization issues, which easily lend themselves to visions of major security issues.
Though we cannot use threading on Jira Cloud, we can use async. Simply put, the difference between the two is that threading uses concurrent or parallel threads of execution to achieve concurrency. That is, it literally splits the tasks into separate requests to the CPU.
Async, on the other hand, uses non-blocking code to achieve concurrency. That is, the script simply says “okay, we’re going to run this part of the script in the background, and when it’s done we’ll collect the results.” Async still technically uses threads, but it uses them in a way that is different than the ExecutorService. The material difference between the two is that threading creates new threads, which uses up resources. Async uses whatever threads are available from the existing thread pool. What’s important to know is that the Atlassian Cloud supports async, and so that’s what we use.
Let’s look at an example.
The process to actually declare an asynchronous method isn’t terribly complicated. The script below essentially does two things: it creates HTTP connections using an array of URLs, and it adds those tasks to a collection of asynchronous tasks. When the tasks have each completed, the results are added to a string buffer. When all the async tasks have completed, the contents of the string buffer are returned.
The process of creating the HTTP requests isn’t new or complicated, we do that all the time when working with the REST API. The only complication here is understanding the method that starts the future tasks, and understanding the structure of the loop that runs the requests.
import java.net.HttpURLConnection
import java.net.URL
import groovy.transform.CompileStatic
import java.util.concurrent.FutureTask
@CompileStatic
def async (Closure close) {
def task = new FutureTask(close)
new Thread(task).start()
return task
} //Tell Groovy to use static type checking, and define a function that we use to create new async requests
String username = "<username>"
String password = "<password>"
//Define the credentials that we'll use to authenticate against the server/DC version of Jira or Confluence
//If we want to authenticate against Jira or Confluence Cloud, we'd need to replace the password with an API token, and replace the username with an email address
// Define a list of URLs to fetch
def urls = [
"<URL>",
"<URL>",
"<URL>",
"<URL>",
"<URL>",
]
// Define a list to hold the async requests
def asyncResponses = []
def sb = []
// Loop over the list of URLs and make an async request for each one
urls.each {
url ->
//For each URL in the array
def asyncRequest = {
//Define a new async request object
HttpURLConnection connection = null
//Define a new HTTP URL Connection, but make it null
try {
// Create a connection to the URL
URL u = new URL(url)
connection = (HttpURLConnection) u.openConnection()
connection.setRequestMethod("GET")
//Create a new HTTP connection with the current URL, and set the request method as GET
connection.setConnectTimeout(5000)
connection.setReadTimeout(5000)
//Set the connection parameters
String authString = "${username}:${password}"
String authStringEncoded = authString.bytes.encodeBase64().toString()
connection.setRequestProperty("Authorization", "Basic ${authStringEncoded}")
//Set the authentication parameters
// Read the response and log the results
def responseCode = connection.getResponseCode()
def responseBody = connection.getInputStream().getText()
logger.warn("Response status code for ${url}: ${responseCode}")
sb.add("Response body for ${url}: ${responseBody}")
} catch (Exception e) {
// Catch any errors
logger.warn("Error fetching ${url}: ${e.getMessage()}")
} finally {
// Terminate the connection
if (connection != null) {
connection.disconnect()
}
}
}
asyncResponses.add(async (asyncRequest))
}
// Wait for all async responses to complete
asyncResponses.each {
asyncResponse ->
asyncResponse.get()
}
return sb
In an attempt to become a stronger user of Groovy and Jira, I’m challenging myself to learn something new each day for 100 days. These aren’t always going to be especially long blog posts, but they’ll at least be something that I find interesting or novel.
If we want to work with the elements in a collection, we have a number of options. My favourite method is to use a closure with .each, which could be as simple as this:
def eachList = [5, 10, 15]
eachList.each{element ->
log.warn(element.toString())
}
The closure allows us to iterate through each element in the collection. Groovy also has a .collect method. Implementing it would look something like this:
def collectList = [1, 2, 3]
def squaredCollectList = collectList.collect { element ->
element * element
}
return squaredCollectList
So what’s the practical difference?
With .each, we’re simply iterating through a collection of elements that already exists. With .collect, we’re defining a new collection (squaredCollectList). We then iterate through all of the elements of the predefined list (collectList), square each element, and add the result to the newly defined collection. In simple terms, .each iterates through a list; .collect iterates through a list and adds the transformed elements to a new collection. The collect method is useful if you want to easily transform an entire collection of objects into a collection of objects of a different type. The each method is useful if you simply want to iterate through a collection and do something with each element.
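As a quick illustration of the difference, here's a minimal sketch. It uses .collect to turn a list of numbers into a list of strings (a collection of a different type), while .each is used purely for its side effects:
def numbers = [1, 2, 3]
//.collect builds and returns a brand new collection of transformed elements
def labels = numbers.collect { n ->
"Count: ${n}"
}
//labels is now a List of Strings: ["Count: 1", "Count: 2", "Count: 3"]
//.each returns the original list unchanged; it's only useful for side effects like logging
numbers.each { n ->
log.warn(n.toString())
}
return labels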
TrustedRequestFactory is a Jira-supplied way of authenticating against Jira itself. If I wanted to authenticate against the current instance, or an external instance, this is what I’d consider using.
This script iterates through a collection of Jira URLs, and processes them as TrustedRequestFactory GET requests in parallel. This is useful in cases when a large number of requests need to be submitted as HTTP; instead of waiting for each one to finish in turn, we can run them all at once.
As it stands, this script authenticates using the logged-in user. However, this could be amended pretty easily by simply adding the correct type of authentication as a header on the request. As well, if you wanted to authenticate against a different Jira instance, the URL structure would need to be amended slightly.
I’ve commented the code as best I can, but quite frankly I’m still learning about parallel execution myself. I’m next going to dig into async vs. threading, and see what I can discover.
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.config.properties.APKeys
import com.atlassian.sal.api.net.Response
import com.atlassian.sal.api.net.ResponseException
import com.atlassian.sal.api.net.ReturningResponseHandler
import com.atlassian.sal.api.net.TrustedRequest
import com.atlassian.sal.api.net.TrustedRequestFactory
import com.atlassian.sal.api.net.Request
import groovy.json.JsonSlurper
import groovyx.net.http.ContentType
import groovyx.net.http.URIBuilder
import java.net.URL;
import java.util.concurrent.Callable
import java.util.concurrent.Executors
def currentUser = ComponentAccessor.jiraAuthenticationContext.loggedInUser
def baseUrl = ComponentAccessor.applicationProperties.getString(APKeys.JIRA_BASEURL)
def trustedRequestFactory = ComponentAccessor.getOSGiComponentInstanceOfType(TrustedRequestFactory)
//We use an instance of TrustedRequestFactory to authenticate against Jira
def endPointPath = '/rest/api/2/issue/'
def responses = []
//Define an array to hold the responses as they come in
def urls = [
baseUrl + endPointPath + "DEMO-40",
baseUrl + endPointPath + "DEMO-41",
baseUrl + endPointPath + "DEMO-42"
]
def responseHandler = new ReturningResponseHandler<Response, Object>() {
Object handle(Response response) throws ResponseException {
if (response.statusCode != HttpURLConnection.HTTP_OK) {
log.error "Received an error while posting to the rest api. StatusCode=$response.statusCode. Response Body: $response.responseBodyAsString"
return null
} else {
def jsonResp = new JsonSlurper().parseText(response.responseBodyAsString)
responses.add(jsonResp)
return jsonResp
}
}
}
//Define a response handler that we'll feed to the executeAndReturn method
//This is also where the information from the request is returned. It gets stored in the responses array as a collection of JSON objects
def executeRequest = {
url ->
def request = trustedRequestFactory.createTrustedRequest(Request.MethodType.GET, url) as TrustedRequest
//This works, not sure why the script console complains about it
request.addTrustedTokenAuthentication(new URIBuilder(baseUrl).host, currentUser.name)
request.addHeader("Content-Type", ContentType.JSON.toString())
request.addHeader("X-Atlassian-Token", 'no-check')
request.executeAndReturn(responseHandler)
} //Define an execution request. This includes the creation of the request, and the authentication parameters
def executor = Executors.newFixedThreadPool(urls.size())
//Define an Executor and thread pool, the size of which is determined by the number of URLs we predefined
def futures = urls.collect { url ->
executor.submit(new Callable<Object>() {
Object call() throws Exception {
executeRequest(url)
}
})
} //Iterate through the collection of URLs, submitting each as a callable object
executor.shutdown()
executor.awaitTermination(Long.MAX_VALUE, java.util.concurrent.TimeUnit.SECONDS)
//Shut down the Executor after all of the URLs have been processed
responses.each {
response ->
log.warn(response.self)
}//Iterate through the responses
(If you haven’t read the previous post in this series, I highly recommend starting with that.)
In order to make use of the Jira API documentation, we need to understand what classes, packages, and methods are. That’s because the JIRA API is documented, but only in the most technical sense. It is predicated on you having some knowledge of Java. You can go to https://docs.atlassian.com/software/jira/docs/api/latest and get information about all of the internal libraries that Jira uses to function, but it’s not much good to you without that prior knowledge.
At the same time, knowing how to read and make use of the API documentation is a vital skill when it comes to working with ScriptRunner. All of Jira’s functionality is available for you to use, but only if you can harness it through the power of classes.
The problem I ran into when I was first starting to learn how to use ScriptRunner is that very little is written in a context that a beginner could make use of, and few examples are provided. A lot of Jira’s internal functions are interdependent, and someone who is new to both Groovy and Jira might find themselves struggling to make sense of it.
While I now have some experience in making use of the documentation, a great deal of what I do is still trial and error. I have found much more success in identifying a library or class that might do what I need it to do, and then Googling for examples of people putting that class into active use. I’ve often found scripts that include both the primary class I was looking at, and the necessary related classes to make it useful.
Even still, sometimes I run into a problem I’m not sure how to solve. This weekend I was trying to implement a package that is fully referenced in the documentation, because I was working on a blog post about unit testing with Groovy. The package is com.atlassian.jira.functest.framework.assertions.*. No matter what I did, ScriptRunner just couldn’t find it. As it turns out, even if a library is documented, that doesn’t mean that it’s necessarily available to all apps. That’s just how it be.
Let’s briefly look at how we might make use of a library or class that is available to an app like ScriptRunner.
Jira is written in Java, and uses a collection of Java code to run. The code is made up of classes, which are collections of methods, which are literally the method by which Jira operates. Packages are collections of classes.
In other words, whenever Jira wants to do something it calls upon a method, which is stored in a class. Related classes (such as classes involved in managing or working with projects) are stored in a collection called a package.
There’s a lot more to Java classes and packages than that, but that’s enough to get us started.
So you’ve got your ScriptRunner script console open, and you’ve imported your Component Accessor. You’ve even declared the Component Accessor as something you’d like to work with. Now what?
import com.atlassian.jira.component.ComponentAccessor
def component = ComponentAccessor.getComponent()
If you put your cursor between the brackets after getComponent() and hit any letter, you’ll get a list of available methods that start with that letter. However, none of them are particularly useful. To do anything with the Component Accessor, we need to import a package or class with which we’d like to work. In this example, we’re going to use a class that allows us to work with projects. In this case, let’s look at the documentation for the ProjectManager class.
If you click the link above, you’ll find yourself on a page with a whoollllle lot of information. Once you know what you’re looking at and what to look for, I promise it’ll make more sense.
On that page, if we look above where it says Interface ProjectManager, you’ll see that it says com.atlassian.jira.project. That is the package in which the ProjectManager class lives (remember, related classes are often grouped together in a package). ProjectManager isn’t the only class that lives within this package, but it’s the one we’re after.
From this page, or any similar page in the documentation, we can assemble our import. We start by importing the package, and then the class. In this case, the class is ProjectManager:
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.project.ProjectManager
def component = ComponentAccessor.getComponent()
So we’ve told the ScriptRunner console that we wish to import the ProjectManager class, which is stored in the Project package.
We’re not actually making use of the ProjectManager class yet, but the script console is now aware that you’d like to use it in your script. If you wanted to import every class in the package, you could simply import com.atlassian.jira.project.*. For the sake of simplicity, however, we’ll stick to importing the single class.
Once again put your cursor between the brackets () after getComponent.
Now type the name of the class that you’d like the Component Accessor to access. In this case, you’re typing ProjectManager (capitalization is important). The words you type should be a nice shade of green. While we’re at it, let’s rename the object you’re creating from component to projectManager. Notice the difference in capitalization between our reference to the class, and the name we gave the object. This is enough to differentiate them, as Groovy is case-sensitive:
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.project.ProjectManager
def projectManager = ComponentAccessor.getComponent(ProjectManager)
We now have an object that instantiates (is an instance of) the ProjectManager class, and we can move on to exploring its methods.
Before we go any further, let’s recap. We’ve imported our Component Accessor, with which we’ll access the functions of the Jira internal libraries. We’ve also imported the ProjectManager class, and used the Component Accessor to access it and to create an object called projectManager.
Now what?
We have our class, and we have an object that contains the details of that class. Now we get to access the methods of the ProjectManager class. These methods are literally the functions by which the class accomplishes what it is designed to do.
Going back to the Jira documentation page for ProjectManager, we see a section called Method Summary. Below this is a list of all the methods available to us through the ProjectManager class.
Some methods take input, some don’t. For example, there’s a method in the list called .getProjectObjects(). There’s nothing between the brackets, so we know that it’s not expecting input of any kind. On the other hand, the method called getProjectObj(Long id) is expecting input of the type Long.
Let’s experiment. If you want to make use of a method, simply tack it on to the end of the object that instantiated the ProjectManager class. So in this case, our object is projectManager:
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.project.ProjectManager
def projectManager = ComponentAccessor.getComponent(ProjectManager)
return projectManager.getProjectObjects()
By running this code, the data that is returned should be the project Keys of every project in the instance (your list will be different):
[Project: DELETEME, Project: DESS, Project: DES, Project: DEMO, Project: DMCA, Project: DMCB, Project: KDES, Project: NMC, Project: SD]
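To try a method that does expect input, we can hand a project ID to getProjectObj(). This is just a sketch: the ID 10000L below is a placeholder, and the IDs in your own instance will differ:
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.project.ProjectManager
def projectManager = ComponentAccessor.getComponent(ProjectManager)
//10000L is a placeholder project ID; substitute a real ID from your instance
return projectManager.getProjectObj(10000L)?.getName()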
This is the essence of working with class imports and objects in Jira: importing the class in question, accessing it with the Component Accessor, and then using the resulting object to access the class methods.
I encourage you to explore some of the other classes listed in the documentation. Maybe try to find a class that would let you work with issues, and explore its methods (hint: its class name is very similar to the class we used today).
The only way to really learn Groovy and to become a confident ScriptRunner user is to get your hands dirty, and to ask questions!
This morning I was playing around with the assert statement in ScriptRunner, trying to understand the nuances of it. I was getting frustrated because I wanted the assertion to fail, but then for the script to keep going. I couldn’t figure out how to capture the error without having the script stop.
I tried a few things. I know you can include a custom failure message at the end of an assertion, like so:
assert (1 == 2) : "Assertion failed"
java.lang.AssertionError: Assertion failed. Expression: (1 == 2)
at org.codehaus.groovy.runtime.InvokerHelper.assertFailed(InvokerHelper.java:438)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.assertFailed(ScriptBytecodeAdapter.java:670)
at Script205.run(Script205.groovy:2)
But even with the custom message, my script would grind to a halt when the assertion failed. I also tried putting the assertion in a try/catch statement, but that didn’t work at all.
I then realized that I had no idea what the functional difference between an assertion and a try/catch statement actually was. Moreover, I didn’t know when it was appropriate to use one or the other.
From what I read, it seems that assertions are generally for testing code, and aren’t appropriate in production. Try/catch statements are for catching exceptions or edge cases. The two are not actually equivalent or interchangeable.
If you want to make assertions in production code, Groovy allows you to make assertions without using the assert statement. For example, the below code evaluates a statement, but instead of bringing the script to a halt it returns the result of the comparison:
def test = 1 == 2
return "Test result: ${test.booleanValue()}"
This would allow the code to keep running, and to react to the failure of the assertion accordingly.
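Incidentally, there's a reason the try/catch attempt didn't seem to work: a failed assert throws java.lang.AssertionError, which extends Error rather than Exception, so a catch (Exception e) block never sees it. Catching AssertionError explicitly does let the script carry on. A minimal sketch:
try {
assert 1 == 2 : "Assertion failed"
} catch (AssertionError e) {
//AssertionError extends Error, not Exception, so it has to be caught explicitly
log.warn("Caught the failed assertion: ${e.message}")
}
return "Script kept running"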
This is an extension of a previous post that I did, on returning all of the users in a Jira Cloud instance. The request was for a script that returns all users in a Jira Cloud instance who have been inactive for over 30 days. Unfortunately, only part of this request is doable. The Jira Cloud REST API does not currently provide any way of returning login or access timestamp information. The most it will tell you is whether a user account is active or inactive. The only way to get access information is to export the Jira userbase through the Atlassian portal. So the script below simply returns all Jira users in a Cloud instance who have an inactive account. If/when the day comes that Atlassian adds login information to the API, I’ll update this post.
import groovy.json.JsonSlurper
def sb = []
def page = 0
def lastPage = false
while (lastPage == false) {
//run this loop until we detect the last page of results
def getUsers = get("/rest/api/2/user/search?query=&maxResults=200&startAt=" + page)
.header('Content-Type', 'application/json')
.asJson()
//Get the current batch of users as an HTTP GET request
def content = getUsers.properties.rawBody
//Get the body contents of the HTTP response
InputStream inputStream = new ByteArrayInputStream(content.getBytes());
String text = new String(inputStream.readAllBytes());
def parser = new JsonSlurper()
def json = parser.parseText(text)
//Convert the resulting bytestream first to a string, and then to JSON So we can work with it
json.each {
userAccount ->
if (userAccount.active == false) {
//We only want users who aren't active
//For each result in the JSON
sb.add(userAccount.displayName.toString() + " has an inactive account")
//write the user account ID to the log
}
}
if (json.size() < 200) {
lastPage = true
logger.warn("Setting lastPage to true")
//If the number of users in the current batch is less than 200, we must have reached the end and we can kill the loop
} else {
page += 200
//Otherwise, increase pagination by 200 and keep going
}
}
return sb
A long long time ago, I posted a complaint to this blog about how I had no idea what a Jira SearchContext was, and how it was very frustrating to try to figure it out.
Yesterday I realized that I had never really made any strides toward fixing my lack of knowledge around the search functionality of Jira. I’m not talking JQL, I’m talking the actual Search Service waayyy down in the guts of Jira.
I set out wanting to update the permissions for a list of filters on Jira Server, based on the name of each filter. The list of filter names came from a migration we were doing from Jira Server to Jira Cloud. JCMA returned a list of the names of forty filters that were publicly accessible. My task was to go in and update those permissions. As I was going to have to repeat this process multiple times, it seemed an excellent candidate for a scripted solution.
As it turns out, it is extremely difficult to search for filters by name. You can search for filters by their ID without issue, but searching by name is essentially not possible.
My next idea was to simply wipe public access from ALL filters on the system. I figured I’d be able to call some method like “getFilters()” and simply have a list of all the filters. Nope! You need to use the search service, and that’s where my current knowledge ran out, and I had to learn something new.
The work below is based on a script from the Adaptavist library, linked here. It was immensely helpful in getting started. Most of the comments in the script below are mine, simply by way of my trying to explain to myself what was going on at each step. I believe that this script is a good stepping stone toward learning how to use the search to return other types of items; the service is not limited to searching for types of shares.
The actual script simply searches for globally shared items, and updates the permissions on any of those items so that members of jira-users can access the shared item. In this case the update is limited to changing the permissions of filters.
import com.atlassian.jira.bc.JiraServiceContextImpl
import com.atlassian.jira.bc.filter.SearchRequestService
import com.atlassian.jira.bc.portal.PortalPageService
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.portal.PortalPage
import com.atlassian.jira.sharing.SharePermissionImpl
import com.atlassian.jira.sharing.SharedEntity
import com.atlassian.jira.sharing.search.GlobalShareTypeSearchParameter
import com.atlassian.jira.sharing.search.SharedEntitySearchParametersBuilder
import com.atlassian.jira.sharing.type.ShareType
import com.atlassian.sal.api.ApplicationProperties
import com.atlassian.sal.api.UrlMode
import com.onresolve.scriptrunner.runner.ScriptRunnerImpl
def searchRequestService = ComponentAccessor.getComponent(SearchRequestService)
def currentUser = ComponentAccessor.jiraAuthenticationContext.loggedInUser
def applicationProperties = ScriptRunnerImpl.getOsgiService(ApplicationProperties)
def portalPageService = ComponentAccessor.getComponent(PortalPageService)
def globalPermissionManager = ComponentAccessor.globalPermissionManager
def contextPath = applicationProperties.getBaseUrl(UrlMode.RELATIVE)
def serviceContext = new JiraServiceContextImpl(currentUser)
//Define a new service context. Essentially, tell Jira that we want to define a search, and who it should run as.
def searchParameters = new SharedEntitySearchParametersBuilder().setShareTypeParameter(GlobalShareTypeSearchParameter.GLOBAL_PARAMETER).toSearchParameters()
//We're defining a series of parameters that will return items with the matching share permissions
//AuthenticatedUserShareTypeSearchParameter, GlobalShareTypeSearchParameter, GroupShareTypeSearchParameter, PrivateShareTypeSearchParameter, ProjectShareTypeSearchParameter
searchRequestService.validateForSearch(serviceContext, searchParameters)
//Validate the search before performing it
assert !serviceContext.errorCollection.hasAnyErrors()
//Confirm that the validation returned no errors
def searchFilterResult = searchRequestService.search(serviceContext, searchParameters, 0, Integer.MAX_VALUE)
//Actually perform the search, and save the results in an object
//The four parameters are the serviceContext (already established), the search parameters (already established), page position, and page width
//In other words, the last two integers are for pagination
final authenticatedUserSharePerms = new SharedEntity.SharePermissions([new SharePermissionImpl(null, ShareType.Name.GROUP, "jira-users", null)] as Set)
//SharedEntity is a class for any entity that can be shared or favorited
//https://docs.atlassian.com/software/jira/docs/api/7.2.0/com/atlassian/jira/sharing/SharedEntity.html
//We're defining a new set of share permissions:
//https://docs.atlassian.com/software/jira/docs/api/7.6.1/com/atlassian/jira/sharing/SharePermissionImpl.html
searchFilterResult.results.each {
filter ->
filter.setPermissions(authenticatedUserSharePerms)
//Set the new permissions on the filter
def filterUpdateContext = new JiraServiceContextImpl(currentUser)
//Define the context in which this update will be performed
//If we try to run the update as the filter.owner, it doesn't work with filters that are exposed to anyone on the web
searchRequestService.updateFilter(filterUpdateContext, filter)
//Actually perform the update, using the context and running against the filter
log.warn("Updating permissions for filter: ${filter.name}")
if (filterUpdateContext.errorCollection.hasAnyErrors()) {
log.warn("Error updating filter - possibly owner has been deleted. Just delete the filter. " + filterUpdateContext.errorCollection)
}
}
So you or your organization have decided to purchase, or at least trial, ScriptRunner. Now what?
If you’re going to learn how to use ScriptRunner, I would very strongly advise that you set it up in a test environment. This allows you to learn to use the tool, and to not have to worry about making mistakes or deleting production data.
In order to use ScriptRunner, you’ll need to be an admin on the Jira system in question. As well, I’ll do my best to explain the Groovy code that we’re using, but it would be of great benefit for you to read some primary learning materials on the subject. Some of what I’ll say will assume a basic knowledge of object-oriented programming principles. Please note that Groovy is case-sensitive, but is agnostic to whitespace.
All of the testing and learning we’re going to be doing is done in the ScriptRunner Script Console. You can think of this as sort of a window into your Jira system, in which Groovy code can be run. The Script Console is accessed by going to your Jira System Settings > Manage Apps > ScriptRunner > Script Console. ScriptRunner has many more features than just the Console, but this is where we’ll go to run one-off code:
So you’ve accessed the Script Console. It’s pretty empty. Now what?
In this post we’re going to learn how to use ScriptRunner to return the key of every project in a Jira instance.
Let’s talk about imports. Imports are a standard aspect of almost every programming language, and they allow you to bring in additional functionality in the form of code libraries. Libraries are simply a preconfigured set of tools or functions available to anyone who wants to use them in their code. Almost any Groovy script that you want to write for Jira is going to involve the Component Accessor, so let’s start by importing that library with the following import statement:
import com.atlassian.jira.component.ComponentAccessor
On its own, this doesn’t do anything. But it allows us to use the Component Accessor class in our code, and the Component Accessor is our gateway to all of Jira’s functionality. Think of the Component Accessor as almost like a translator. Jira’s code library has many hundreds and hundreds of functions, but the Component Accessor is what allows the ScriptRunner Script Console to make use of those libraries. Let’s look at an example.
Still with our import statement, we’re going to use the Component Accessor to access a part of the Jira library. Add the following statement below your import statement:
def allProjects = ComponentAccessor.projectManager
Let’s examine what we just added. def is short for define, and is telling the Script Console that we want to define a new object. allProjects is the name we’re giving to the new object, as our intention is to have it hold a record of all of the projects in the system. Notice what comes next. We have invoked the ComponentAccessor, and accessed a method of that class called projectManager. If you simply type ComponentAccessor. and stop typing after the period, the Script Console should suggest to you a list of all of the available methods that the ComponentAccessor can currently access.
If we were to add an import statement, further methods would become available to the ComponentAccessor, and would appear in the list. This manner of accessing the specifics of something by referring to the object with a period is foundational to using Groovy and ScriptRunner.
Okay so we used our ComponentAccessor to create an object called allProjects, which is actually of the type ProjectManager. What use is that?
Under the code you’ve already added, type def projects = allProjects., not forgetting to include the period. As before, we’ve declared a new object. This object will contain some aspect of the allProjects object. After the dot, we’re presented with a list of methods or functions that the allProjects object can impart upon our newly declared object. The one we’re interested in is simply called projects:
That leaves us with a script console that should contain the following:
import com.atlassian.jira.component.ComponentAccessor
def allProjects = ComponentAccessor.projectManager
def projects = allProjects.projects
Finally, we’ve reached the point in the script where we have something concrete to work with. We now have a collection of objects, and we called that collection projects. Each item in that collection is itself a collection of information about one project in the instance.
Because projects is a collection of pieces of information, we’ll need to tell the Script Console to return each piece one at a time. There are many ways to do this, but we’re going to do it with a closure. Consider the following code:
import com.atlassian.jira.component.ComponentAccessor
def allProjects = ComponentAccessor.projectManager
def projects = allProjects.projects
projects.each{ project ->
log.warn(project.key)
}
What we’re telling the Script Console is that for each item in the collection called projects, we want to do something with that item. We’re looping or iterating through the collection of items. Each time we loop and a new item is returned from the collection, we’re simply referring to that item as project. We could have used any word instead of project, but that word makes the most sense in this context.
So the closure loops through each item, referring to each item as project as it considers them in turn. We’ve told the closure that for each instance of something called project, we want the project key to be logged in the log file. We could have told it to return any aspect of the project, so long as it was available to us using that same dot notation.
In fact, within that closure we could have done most anything we wanted to the project objects. Returning information about them is the simplest example, but the object is yours to do with as you wish. Speaking of output, let’s look at the expected format. Right below the blue RUN button on your screen, it should say Result and Logs. The information your script outputs will appear in one of these two places, depending on how you ask the script to return data. In this case we want the Log tab, since we sent information to the log file:
2023-02-03 21:23:47,232 WARN [runner.ScriptBindingsManager]: DELETEME
2023-02-03 21:23:47,256 WARN [runner.ScriptBindingsManager]: DESS
2023-02-03 21:23:47,256 WARN [runner.ScriptBindingsManager]: DES
2023-02-03 21:23:47,257 WARN [runner.ScriptBindingsManager]: DEMO
2023-02-03 21:23:47,257 WARN [runner.ScriptBindingsManager]: DMCA
2023-02-03 21:23:47,257 WARN [runner.ScriptBindingsManager]: DMCB
2023-02-03 21:23:47,257 WARN [runner.ScriptBindingsManager]: KDES
2023-02-03 21:23:47,257 WARN [runner.ScriptBindingsManager]: NMC
2023-02-03 21:23:47,257 WARN [runner.ScriptBindingsManager]: SD
Your output will undoubtedly be different, as the keys of your projects will be different, but that’s it!
So what have we done? We imported the Component Accessor, and used it to access Jira’s ProjectManager library. We created a ProjectManager object called allProjects, and from it built a collection called projects. We told the Script Console to tell us about each of the objects in projects, but that we only wanted to know the key of each of those items. The Script Console printed the key of each item in projects to the log, and we got a list of every project key in Jira.
I encourage you to examine some of the methods and information available to you. Try accessing information about the projects other than the key.
The next blog post in this series will focus on a more complex example of how an admin might make use of ScriptRunner to return information about a project.
I am entirely self-taught when it comes to ScriptRunner and Groovy. Everything I’ve learned has been through trial and error, and Googling different permutations of words until I find a solution.
A great deal of the information out there assumes that you already know how to work with ScriptRunner, Groovy, and Jira or Confluence. I found this to be terrifically frustrating when I first started, as I did not have the requisite knowledge to make use of the information that I was finding. I didn’t have the skills to put it into context, never mind making use of it in the specific use case to which I was trying to apply it.
For that reason, I’m going back to the beginning. I’m starting an ongoing series of blog posts about how to get started with ScriptRunner for both Jira and Confluence. You need to learn to walk before you can run, so for that reason I am calling this series the ScriptWalker series.
Not only will this hopefully be a resource for persons just starting out with ScriptRunner, but it will also force me to be sure that I can teach what I’m doing. In the end, that will make me a stronger user of the tools.
Some of the information in this series will be relevant to both Jira and Confluence. Some of it will be specific to one platform or the other. What I’m going to strive to do is provide examples for Cloud and for Server/DC. They’ll be separate blog posts, but hopefully posted at the same time, or at least close together.
ScriptRunner isn’t cheap, but it’s an amazing tool that extends Jira and Confluence pretty much as far as your imagination will allow you to envision. There are trial licenses available for Jira and Confluence on both Server/DC and Cloud.
Finally, if you have a topic or a question that you’d like me to cover as part of this series, please leave a comment or send me a message on LinkedIn!
Adaptavist has a tool called Microscope that Jira admins can use to look into the specifics of various aspects of the system, including workflows. If you’re looking to examine an instance, I recommend using Microscope rather than writing your own script.
However, it was requested that I look into workflows in a very specific way: the requester needed a tool that would take the name of a group and search all of the workflows for any references to that group. In effect, they were searching for dependencies on that group within the workflows. This script does not do that, but this is the basis upon which that script is built. This script takes a workflow and returns the validator conditions for all of the transitions. This could easily be adjusted to return the conditions for the triggers, etc.
import com.atlassian.jira.workflow.WorkflowManager
import com.atlassian.jira.component.ComponentAccessor
def workflowManager = ComponentAccessor.getComponent(WorkflowManager)
def wf = workflowManager.getWorkflow("<workflow name>")
//wf is the workflow itself
def statuses = wf.getLinkedStatusObjects()
//statuses are the statuses available to that workflow
statuses.each {
status ->
//For each status associated with the workflow
def relatedActions = wf.getLinkedStep(status).getActions()
//relatedActions is the set of actions (transitions) associated with each status
//log.warn("For the status " + status.name + " the available transitions are: " + relatedActions)
relatedActions.each {
action ->
//For each related transition, get the validators associated with that transition
def actionProperties = action.properties
actionProperties.validators.each {
valid ->
log.warn(status.name + " has the following validator arguments: " + valid.properties.args)
}
}
}
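Building on that, here's a hedged sketch of the group dependency search described above. It scans every workflow, and flags any validator whose arguments mention a given group; the group name jira-developers is a placeholder:
import com.atlassian.jira.workflow.WorkflowManager
import com.atlassian.jira.component.ComponentAccessor
def workflowManager = ComponentAccessor.getComponent(WorkflowManager)
final targetGroup = "jira-developers"
//Placeholder group name; scan every workflow in the instance rather than a single named one
workflowManager.getWorkflows().each { workflow ->
workflow.getLinkedStatusObjects().each { status ->
workflow.getLinkedStep(status).getActions().each { action ->
action.properties.validators.each { valid ->
//Crude but effective: check whether the validator arguments mention the group anywhere
if (valid.properties.args.toString().contains(targetGroup)) {
log.warn("Workflow " + workflow.name + ", status " + status.name + ": validator references " + targetGroup)
}
}
}
}
}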
This was actually an interesting problem to solve. Atlassian don’t seem to want anyone returning all of the users in a Jira instance through the API. There’s supposedly a method for doing this, but it doesn’t work if you’re running the script through a Connect app like ScriptRunner. This is another method that only works with an external script, as was the case with managing Cloud Confluence Space permissions.
Instead what we do is run an empty query through the user search. However, this presents its own set of challenges, as the body is only returned in a raw format. That is, instead of returning JSON, the HTTP request is returned as a byte stream.
So after running the query, we turn it into text and then use the JSON Slurper to turn it into a JSON object with which we can work.
Despite the strangeness of the raw response, pagination, startAt, and maxResults still work, and are necessary to get all of the results. Additionally, there is no flag in the HTTP response that indicates the last page of results, such as “lastPage”. Therefore we must determine the final page of results ourselves. The script starts by setting the lastPage flag to false; the main script loop will run until this is no longer the case. It also initializes page to 0; we’ll increment this to get every subsequent set of results. By default, only 50 results are returned. The main program loop is then initiated. Next, an HTTP GET request is made with an empty user search query. This will give us a bytestream containing a batch of users.
That bytestream is then read into a string, which in turn is JSONified, so that we may work with its elements.
For each resulting user in the JSON blob, we simply note the result in the logs. Any action could be taken at this point, using the user’s account ID. We’re incrementing by 200 results each time. Therefore, if a batch contains fewer than 200 users, it must be the last page of results, and we can end the loop by setting the lastPage flag to true. If it’s not the last page, we increment the page offset by 200 and loop again.
import groovy.json.JsonSlurper
def page = 0
def lastPage = false
while (lastPage == false) {
//run this loop until we detect the last page of results
def getUsers = get("/rest/api/2/user/search?query=&maxResults=200&startAt=" + page)
.header('Content-Type', 'application/json')
.asJson()
//Get the current batch of users as an HTTP GET request
def content = getUsers.properties.rawBody
//Get the body contents of the HTTP response
InputStream inputStream = new ByteArrayInputStream(content.getBytes());
String text = new String(inputStream.readAllBytes());
def parser = new JsonSlurper()
def json = parser.parseText(text)
//Convert the resulting bytestream first to a string, and then to JSON So we can work with it
json.each{userAccount ->
//For each result in the JSON
logger.warn(userAccount.accountId.toString())
//write the user account ID to the log
}
logger.warn("Current batch of users contained: " + json.size().toString())
if (json.size() < 200) {
lastPage = true
logger.warn("Setting lastPage to true")
//If the number of users in the current batch is less than 200, we must have reached the end and we can kill the loop
}else{
page += 200
//Otherwise, increase pagination by 200 and keep going
}
}
I spend a fair amount of time writing listeners with ScriptRunner, so that Jira will do various things for me based on certain criteria. For example, I write a lot of listeners that listen for when an issue is updated, and take action accordingly.
Until today, however, I had never thought about how I might leverage the system to determine what kind of update had triggered the event. I always just wrote the criteria in by hand, so that the listener would ignore anything I didn’t want eliciting a reaction. The listener I was working on today got so complex, and had so many nested IF statements and conditions, that it occurred to me to search for a better way.
As it turns out, the event object contains a lot of information, including which field’s change triggered the listener.
In my own example, I was looking at the labels field. I wanted the script to send a Slack message if the issue had been updated, but only if the labels had changed in a certain way. The line of code required to check the type of update event doesn’t even require an import:
def change = event?.getChangeLog()?.getRelated("ChildChangeItem")?.find {it.field == "labels"}
if (change) {
//Do something
} else {
//Do something else
}
The code is pretty simple, and comes courtesy of an answer on the Atlassian customer forums. Change is defined as an object that exists if the event change log contains the label field. If the change log contains the label field, then that field was at least one of the fields that was updated to trigger the update event.
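If you need more than a yes/no answer, the same change item also exposes the before and after values. The property names below (oldstring and newstring) come from Jira's changelog schema; a minimal sketch:
def change = event?.getChangeLog()?.getRelated("ChildChangeItem")?.find { it.field == "labels" }
if (change) {
//oldstring and newstring hold the human-readable before and after values of the field
log.warn("Labels changed from '${change.oldstring}' to '${change.newstring}'")
}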
With this, I was able to greatly simplify the code required to ensure that the listener was reacting appropriately. It’s a small thing, but it makes a big change!
In my previous post I explored how to access the Confluence Cloud Space Permissions API endpoint.
This Python script extends that, and gives a user a permission set in all Spaces in Confluence. This could be useful if you wanted to give one person Administrative rights on all Spaces in Confluence, for example.
Note that the user must first have READ/SPACE permission before any other permissions can be granted; a sketch of granting a follow-up permission appears after the script below.
from requests.models import Response
import requests
import json
headers = {
'Authorization': 'Basic <Base-64 encoded username and password>',
'Content-Type': 'application/json',
'Accept': 'application/json',
}
userID = '<user ID (not name)>'
url='https://<url>.atlassian.net/wiki/rest/api/space/'
resp = requests.get(url, headers=headers)
data = json.loads(resp.text)
for lines in data["results"]:
    url = "https://<url>.atlassian.net/wiki/rest/api/space/" + lines["key"] + '/permission'
    dictionary = {"subject": {"type": "user", "identifier": userID}, "operation": {"key": "read", "target": "space"}, "_links": {}}
    payload = json.dumps(dictionary)
    try:
        response = requests.post(url=url, headers=headers, data=payload)
        print(response.content)
    except requests.exceptions.RequestException:
        print("Could not add permissions to Space " + lines["key"])
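Because read/space has to come first, granting anything further is just a second POST inside the same loop, with a different operation in the payload. The sketch below is hypothetical: the operation key create with target page is one example from the Cloud permission model, and the url, headers, and lines variables are assumed to be the ones from the loop above:
# Hypothetical follow-up grant, issued inside the same loop after the read grant succeeds
write_payload = json.dumps({"subject": {"type": "user", "identifier": userID}, "operation": {"key": "create", "target": "page"}, "_links": {}})
try:
    response = requests.post(url=url, headers=headers, data=write_payload)
    print(response.content)
except requests.exceptions.RequestException:
    print("Could not add create-page permission to Space " + lines["key"])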
There’s a great deal of information on the internet about managing Confluence Space permissions with scripts, and how there’s no REST endpoint for it, and how it’s basically impossible.
This is incorrect.
There’s also a lot of information about using the JSONRPC or XMLRPC APIs to accomplish this. These APIs are only available on Server/DC. In the Cloud they effectively don’t exist, so this is yet more misinformation.
So why all the confusion?
There’s a lot of outdated information out there that floats around and doesn’t disappear even after it stops being correct or relevant. This is one of the major struggles I had when I started learning how to write scripts to interact with Jira and Confluence. Much of the information used to be relevant, but five or six or ten years later it only serves to distract people looking for a solution. That’s one of the major reasons I started this blog in the first place.
Specific to this instance, another reason for confusion is that the documentation for the REST API does outline an endpoint for Confluence Space permission management, but it includes some very strict limitations that could easily be misinterpreted.
The limitation is this: the API endpoint cannot be used by apps. Any apps. Including ScriptRunner. The only way to make use of this API endpoint is to call it from an external script or tool. So in this case, neither Groovy nor ScriptRunner is applicable.
How you actually address this limitation is up to you. Below is a very simple Python script that sets permissions on a Confluence Cloud Space.
import requests
import json
headers = {
'Authorization': 'Basic <Base 64-encoded username and API Token>',
'Content-Type': 'application/json',
'Accept': 'application/json',
}
data = json.dumps({
    "subject": {
        "type": "user",
        "identifier": "<user ID>"
    },
    "operation": {
        "key": "read",
        "target": "space"
    },
    "_links": {}
})
response = requests.post('https://<confluence URL>.atlassian.net/wiki/rest/api/space/<space key>/permission', headers=headers, data=data)
print(response.content)
The output looks like the blob below. As noted, the information is divided into three sections. Formatting is provided by the script:
Directory name: Confluence Internal Directory
confluence-administrators has 1 members. 1 of those members are active and 0 of those members are inactive
confluence-users has 2 members. 1 of those members are active and 1 of those members are inactive
Group: confluence-administrators. Username: admin. User account is active. User is in directory: Confluence Internal Directory
Group: confluence-users. Username: admin. User account is active. User is in directory: Confluence Internal Directory
Group: confluence-users. Username: slaurin. User account is inactive. User is in directory: Confluence Internal Directory
And here is the code:
import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.confluence.user.UserAccessor
import com.atlassian.user.GroupManager
import com.atlassian.confluence.user.DisabledUserManager
import com.atlassian.crowd.manager.directory.DirectoryManager
def disUse = ComponentLocator.getComponent(DisabledUserManager)
def userAccessor = ComponentLocator.getComponent(UserAccessor)
def groupManager = ComponentLocator.getComponent(GroupManager)
def directoryManager = ComponentLocator.getComponent(DirectoryManager)
def activeUsers = 0
def inactiveUsers = 0
def groups = groupManager.getGroups()
def groupInfoStringBuffer = ["<h1>Group Info</h1>"]
def userInfoStringBuffer = ["<h1>User Info</h1>"]
def directoryInfoStringBuffer = ["<h1>Directory Info</h1>"]
directoryManager.findAllDirectories().each{directory ->
directoryInfoStringBuffer.add("Directory name: " + directory.getName() + "<br>")
}
groups.each{ group ->
//For each group in Confluence
activeUsers = 0
inactiveUsers = 0
//After the group has been selected, declare that the counts of active and inactive users are zero
groupManager.getMemberNames(group).each{ member ->
//Get each member of the group
def userObj = userAccessor.getUserByName(member)
def userDirectoryID = userObj.getProperties().values()[0].directoryId
def userDirectoryName = directoryManager.findDirectoryById(userDirectoryID)
//Declare a user object, using the name of the currently selected user
if (disUse.isDisabled(userObj) == true) {
inactiveUsers += 1
//If the user account is disabled, increase the count of disabled users by 1
} else if (disUse.isDisabled(userObj) == false) {
activeUsers += 1
//If the user account is not disabled, increase the count of active users by 1
}
def accountStatus = ""
if(disUse.isDisabled(userObj) == false){
accountStatus = "<span style='color:green'>active</span>"
}else if(disUse.isDisabled(userObj) == true){
accountStatus = "<span style='color:red'>inactive</span>"
}
userInfoStringBuffer.add("Group: <i>" + group.getName() + "</i>. Username: <i>" + member + "</i>. User account is " + accountStatus + ". User is in directory: <i>" + userDirectoryName.getName() + "</i><br>")
//Log the name of the group, the name of the user, and whether the user's account is disabled (true) or active (false)
}
groupInfoStringBuffer.add(group.getName() + " has " + groupManager.getMemberNames(group).size() + " members. " + activeUsers.toString() + " of those members are <span style='color:green'>active</span> and " + inactiveUsers.toString() + " of those members are <span style='color:red'>inactive</span> <br>")
//Note the information pertaining to each group
}
return directoryInfoStringBuffer.toString().replace(",", "").replace("[", "").replace("]", "") + groupInfoStringBuffer.toString().replace(",", "").replace("[", "").replace("]", "") + userInfoStringBuffer.toString().replace(",", "").replace("[", "").replace("]", "")
//Return the values stored to the string buffer
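One design note on that return line: chaining .replace(",", "") strips commas not only from the list formatting, but also from any group or directory name that happens to contain one. Joining the buffers avoids that side effect entirely. A sketch:
//Concatenate the three buffers and join their elements with no separator, rather than scrubbing toString() output
return (directoryInfoStringBuffer + groupInfoStringBuffer + userInfoStringBuffer).join("")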
The Jira Cloud Migration Assistant tool (JCMA) will only migrate some types of custom fields. The custom fields that it cannot migrate must be recreated on the Cloud side, or otherwise mitigated in some way.
I wrote a small tool that proactively identifies any custom field in a Jira instance that JCMA will not be able to migrate.
It’s important to be proactive, especially when it comes to migrations. Every unexpected error or issue results in lost time, and a delayed migration.
Here’s the code:
import com.atlassian.jira.component.ComponentAccessor
def migrateableFieldTypes = [
"datepicker",
"datetime",
"textfield",
"textarea",
"grouppicker",
"labels",
"multicheckboxes",
"radiobuttons",
"multigrouppicker",
"multiuserpicker",
"float",
"select",
"multiselect",
"cascadingselect",
"userpicker",
"url",
"project",
"version",
"multiversion"
]
def sb = []
def customFields = ComponentAccessor.getCustomFieldManager().getCustomFieldObjects()
//Get all of the custom fields in the system
customFields.each{ customField ->
//For each custom field
if(!migrateableFieldTypes.contains(customField.properties.genericValue.customfieldtypekey.split(":")[1])){
//If the custom field type does not match one of the items in the array of migrateable field types
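//The full type key looks like "com.atlassian.jira.plugin.system.customfieldtypes:textfield", so split(":")[1] keeps just "textfield"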
sb.add(customField.getFieldName() + " | " + customField.properties.genericValue.customfieldtypekey.split(":")[1] + "<br>")
//Note that custom field
}
}
return sb.toString().replace(",", "")
When migrating data from Jira Server/DC to Jira Cloud, JCMA does not like tickets that have no assignee, or whose assignee’s account is inactive.
This script checks a list of issues, and replaces the assignee on any issue whose assignee is missing or inactive.
The script comments should pretty well explain what’s happening with the script. By default it’s set up to check all of the issues in a given project. However, by commenting out one line and uncommenting another, a specific list of issues across any number of projects can be fed to the script.
import com.atlassian.jira.component.ComponentAccessor
def issueManager = ComponentAccessor.getIssueManager()
def issueService = ComponentAccessor.getIssueService()
def userManager = ComponentAccessor.getUserManager()
def projectManager = ComponentAccessor.getProjectManager()
final replacementUser = "<username>"
//Define the username that will be used to replace the current assignee
final projName = "<project name>"
//Define the project to be checked
def project = projectManager.getProjectObjByName(projName).id
//Declare a project ID object, using the name of the project
def issues = ComponentAccessor.issueManager.getIssueIdsForProject(project)
//Get all the issues in the project
//final issues = ["<issues>"]
//Uncomment this line and comment out the previous one to feed the script a specific list of issues
issues.each {
targetIssue ->
//For each issue in the project
def user = userManager.getUserByName(replacementUser)
//Declare a user object, using the name of the replacement user
def user2 = ComponentAccessor.getJiraAuthenticationContext().getLoggedInUser()
//Declare another user object, but this time it's the logged-in user running the script
def issue = issueManager.getIssueObject(targetIssue)
//Declare an issue object using the current issue ID
if ((issue.assignee == null) || (issue.assignee.isActive() == false)) {
//If the assignee of the current ticket is nobody, OR if the assignee has a status of inactive, assign the issue to the replacement user
def validateAssignResult = issueService.validateAssign(user2, issue.id, user.username)
//Authenticated user, issue, user to assign to the issue
//Validate the proposed change
issueService.assign(user2, validateAssignResult)
//Commit the change, acting as the same logged-in user that performed the validation
}
}
Please note: this solution was originally posted by Peter-Dave Sheehan on the Atlassian Forums. I’m just explaining how I use it.
Sometimes when I’m trying to solve a problem with Jira, the internal Java libraries just aren’t sufficient. They’re often not documented, or they’re opaque.
It’s often far easier to turn to the REST API to get work done, but that’s a little more tricky on Jira DC or Server than it is on Cloud. On Jira Cloud, a REST call could be as simple as:
def result = get("/rest/api/2/issue/<issue key>")
.header('Content-Type', 'application/json')
.asObject(Map)
result.body.fields.comment.comments.body.each{field->
return field
}
However this won’t work on Server/DC. Instead we need a REST framework upon which to build our script.
The script returns a JSON blob. With dot notation, we can then easily access its individual attributes, and start working with the values therein.
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.config.properties.APKeys
import com.atlassian.sal.api.net.Response
import com.atlassian.sal.api.net.ResponseException
import com.atlassian.sal.api.net.ReturningResponseHandler
import com.atlassian.sal.api.net.TrustedRequest
import com.atlassian.sal.api.net.TrustedRequestFactory
import com.atlassian.sal.api.net.Request
import groovy.json.JsonBuilder
import groovy.json.JsonSlurper
import groovyx.net.http.ContentType
import groovyx.net.http.URIBuilder
import java.net.URL;
import java.nio.file.Path;
def currentUser = ComponentAccessor.jiraAuthenticationContext.loggedInUser
def baseUrl = ComponentAccessor.applicationProperties.getString(APKeys.JIRA_BASEURL)
def trustedRequestFactory = ComponentAccessor.getOSGiComponentInstanceOfType(TrustedRequestFactory)
//def payload = new JsonBuilder([payloadField: 'payload value']).toString()
def endPointPath = '/rest/api/2/issue/<issue key>'
def url = baseUrl + endPointPath
def request = trustedRequestFactory.createTrustedRequest(Request.MethodType.GET, url) as TrustedRequest
request.addTrustedTokenAuthentication(new URIBuilder(baseUrl).host, currentUser.name)
request.addHeader("Content-Type", ContentType.JSON.toString())
request.addHeader("X-Atlassian-Token", 'no-check')
//request.setRequestBody(payload)
def response = request.executeAndReturn(new ReturningResponseHandler<Response, Object>() {
Object handle(Response response) throws ResponseException {
if (response.statusCode != HttpURLConnection.HTTP_OK) {
log.error "Received an error while posting to the rest api. StatusCode=$response.statusCode. Response Body: $response.responseBodyAsString"
return null
} else {
def jsonResp = new JsonSlurper().parseText(response.responseBodyAsString)
log.info "REST API reports success: $jsonResp"
return jsonResp
}
}
})
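Once the handler returns, response holds the parsed JSON, and dot notation reaches any nested attribute. The fields below are standard Jira issue fields, purely by way of example:
//response is the JSON map handed back by executeAndReturn above
log.warn("Issue key: ${response?.key}")
log.warn("Summary: ${response?.fields?.summary}")
return response?.fields?.status?.name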
One of the challenges that I encountered this week was the need to include Advanced Roadmaps plans in a Jira DC to Cloud migration. As you may be aware, JCMA gives you the option to either migrate ALL plans, or none of them. There is no facility for selectively adding plans. This is a problem because the client instance has 1200 Roadmaps plans, and trying to add that many plans to a migration causes JCMA to crash.
I set out this week to build the foundations of what I’m calling the Roadmaps Insight Tool. The first version was intended to simply list every Roadmaps plan in an instance, and list each of its data sources (project, board, or filter).
The resulting dataset is useful in a number of ways. First, it gives transparency to a part of Jira that is otherwise quite opaque.
Second, it indicates which data sources on each plan are invalid; typically this is because the referenced data source no longer exists. A Jira administrator wanting to do a cleanup of the Roadmaps Plans could easily base that cleanup on this information.
Third, in the case of this particular client it allows us to see which Roadmaps plans can be deleted from the system. This is only feasible because the client is migrating from a QA mirror instead of production, so any non-relevant data can be safely wiped away.
So I built a tool to return the information I needed.
The tool is built primarily using Jira’s internal libraries, rather than making REST calls to the API. Typically I would prefer to use the API directly, but making thousands of REST calls tends to choke a script.
One of the challenges in solving this problem is that the Advanced Roadmaps API is not documented at all. The most you’ll find is vague references to it on the Atlassian forums. However, with a bit of trial and error I found the endpoints I needed.
The tool first gathers a list of every Advanced Roadmaps Plan by sending a GET to /rest/jpo/1.0/programs/list. This list of plans is comprehensive, though it only contains a few details about each plan. We’re after the IDs of the plans. These are numeric, with indexing starting at 1. However, the list can easily contain gaps, if a plan was created and then deleted. For that reason we cannot simply find the highest plan ID and iterate up to it; we’d be making calls to plans that no longer exist, and that is inefficient. Instead we add each extant plan ID to an array.
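The exact shape of the list response isn't documented anywhere, so treat the snippet below as a hedged sketch. It assumes the parsed JSON exposes a plans array in which each entry carries an id; both field names are assumptions to check against what your instance actually returns:
//responseBody is assumed to hold the raw JSON returned by /rest/jpo/1.0/programs/list
def plansJson = new groovy.json.JsonSlurper().parseText(responseBody)
//"plans" and "id" are assumed field names; keep only the IDs of plans that still exist
def idBuffer = plansJson.plans.collect { it.id }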
With an array of plan IDs in hand, we iterate through the array and query /rest/jpo/1.0/plans/<id>. Querying each ID returns a JSON blob that contains information about the data sources of each plan. Here’s an example:
{
"id": 3,
"title": "Business Plan",
"planningUnit": "Days",
"hierarchyLevelToDefaultEstimateMap": {},
"issueSources": [{
"id": 3,
"type": "Board",
"value": "4"
}, {
"id": 4,
"type": "Project",
"value": "10200"
}, {
"id": 5,
"type": "Filter",
"value": "10402"
}],
"nonWorkingDays": [],
"portfolioPlanVersion": 1,
"includeCompletedIssuesFor": 30,
"calculationConfiguration": {
"ignoreSprints": false,
"ignoreTeams": false,
"ignoreReleases": false
},
"issueInferredDateSelection": 1,
"rankAgainstStories": true,
"baselineStartField": {
"id": "baselineStart",
"type": "BuiltIn",
"key": "baselineStart"
},
"baselineEndField": {
"id": "baselineEnd",
"type": "BuiltIn",
"key": "baselineEnd"
},
"createdTimestamp": 1670964732
}
One obvious challenge about the data in its current form is that the data sources are referenced by their ID, rather than their name. This is inconvenient, so we’ll need to address that too. Please note that the examples below do not include the required imports, and are merely intended to illustrate the thinking.
Turning a project ID into a project name is the easiest of the three data sources to translate. We need to simply create a project object using the project ID that we already have, and then get its name from the list of attributes:
def project = projectManager.getProjectObj(<ID>)
return project.getName()
Turning a filter ID into a filter name isn’t much more complicated than turning a project ID into a project name. We use the search request manager to search for the filter object by its ID, and then return its name.
def filter = searchRequestManager.getSearchRequestById(<ID>)
return filter.name
Of the three data sources, turning a board ID into a board name is perhaps the most complicated. Even so, it's relatively simple. We define the board ID, get the views visible to the current user, and search those views for that board ID. Naturally this only works if the current user has administrative access, and therefore can view all of the boards.
def boardID = <ID>
def allViews = rapidViewService.getRapidViews(currentUser).value
def boardObj = allViews?.find {it.id == boardID}
return boardObj.name
Okay, so we have everything we need to query each Plan for each type of data source. But what if a data source doesn't exist? Plans can easily reference data sources that have been deleted or recreated. For that reason, the code that turns an ID into a name needs error capturing, with a log statement in the catch block that indicates which Plan and data source caused the issue.
We have the plans and their sources, we have the ability to turn source IDs into names, and we're capturing any errors that occur. With this in hand, we have everything we need to gain some basic insight into Advanced Roadmaps.
import com.atlassian.greenhopper.model.rapid.BoardAdmin
import com.atlassian.greenhopper.service.rapid.view.BoardAdminService
import com.atlassian.greenhopper.service.rapid.view.RapidViewService
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.config.properties.APKeys
import com.atlassian.jira.issue.search.*
import com.atlassian.sal.api.net.Request
import com.atlassian.sal.api.net.Response
import com.atlassian.sal.api.net.ResponseException
import com.atlassian.sal.api.net.ReturningResponseHandler
import com.atlassian.sal.api.net.TrustedRequest
import com.atlassian.sal.api.net.TrustedRequestFactory
import com.onresolve.scriptrunner.runner.customisers.PluginModuleCompilationCustomiser
import groovy.json.JsonSlurper
import groovyx.net.http.ContentType
import groovyx.net.http.URIBuilder

def searchRequestManager = ComponentAccessor.getComponent(SearchRequestManager)
def rapidViewService = PluginModuleCompilationCustomiser.getGreenHopperBean(RapidViewService)
def boardAdminService = PluginModuleCompilationCustomiser.getGreenHopperBean(BoardAdminService)
def projectManager = ComponentAccessor.getProjectManager()
def currentUser = ComponentAccessor.jiraAuthenticationContext.loggedInUser
def baseUrl = ComponentAccessor.applicationProperties.getString(APKeys.JIRA_BASEURL)
def trustedRequestFactory = ComponentAccessor.getOSGiComponentInstanceOfType(TrustedRequestFactory)

def sb = []
//Define a string buffer to hold the results.
//We need to add the results to a string buffer because logging cuts off at 300 lines
def idBuffer = []
def listOfPlansAPI = '/rest/jpo/1.0/programs/list'
//This URL gives us a list of plans in the instance
def url = baseUrl + listOfPlansAPI

//***********Get the IDs of all the Advanced Roadmaps Plans************
def request = trustedRequestFactory.createTrustedRequest(Request.MethodType.GET, url) as TrustedRequest
request.addTrustedTokenAuthentication(new URIBuilder(baseUrl).host, currentUser.name)
request.addHeader("Content-Type", ContentType.JSON.toString())
request.addHeader("X-Atlassian-Token", 'no-check')
//Authenticate as the current user
def response = request.executeAndReturn(new ReturningResponseHandler<Response, Object>() {
    Object handle(Response response) throws ResponseException {
        if (response.statusCode != HttpURLConnection.HTTP_OK) {
            log.error "Received an error while querying the rest api. StatusCode=$response.statusCode. Response Body: $response.responseBodyAsString"
            return null
        } else {
            def jsonResp = new JsonSlurper().parseText(response.responseBodyAsString)
            jsonResp.plans.id.each { planID ->
                idBuffer.add(planID)
            }
        }
    }
})
//***********End Function***********

//***********Get the Data Sources of Advanced Roadmaps Plans************
idBuffer.each { planID ->
    //Process each plan ID that was stored in the array
    def singlePlanAPI = '/rest/jpo/1.0/plans/' + planID
    //Plan details are accessed by hitting this API endpoint + the ID # of the plan
    def planUrl = baseUrl + singlePlanAPI
    def planRequest = trustedRequestFactory.createTrustedRequest(Request.MethodType.GET, planUrl) as TrustedRequest
    planRequest.addTrustedTokenAuthentication(new URIBuilder(planUrl).host, currentUser.name)
    planRequest.addHeader("Content-Type", ContentType.JSON.toString())
    planRequest.addHeader("X-Atlassian-Token", 'no-check')
    def planResponse = planRequest.executeAndReturn(new ReturningResponseHandler<Response, Object>() {
        Object handle(Response planResponse) throws ResponseException {
            if (planResponse.statusCode != HttpURLConnection.HTTP_OK) {
                log.error "Received an error while querying the rest api. StatusCode=$planResponse.statusCode. Response Body: $planResponse.responseBodyAsString"
                return null
            } else {
                def jsonResp2 = new JsonSlurper().parseText(planResponse.responseBodyAsString)
                jsonResp2.issueSources.each { source ->
                    //For each source of data in a given Plan
                    if (source.type.toString() == "Project") {
                        //If the source is a project, get the project name using the project ID
                        try {
                            def project = projectManager.getProjectObj(source.value.toInteger())
                            //Get the project attributes as an object
                            sb.add(jsonResp2.title + "(Plan ID#" + jsonResp2.id + "). " + " % " + "Project name: " + " % " + project.getName() + "<br>")
                            //Add the details to the string buffer
                        } catch (Exception E) {
                            sb.add("Error: " + jsonResp2.title + "(Plan ID#" + jsonResp2.id + ") had an issue with project ID: " + source.value.toString() + ". Error: " + E.toString() + "<br>")
                            //If the script encounters an error, log it to the buffer
                        }
                    } else if (source.type.toString() == "Board") {
                        //If the source is a board, get the board name using the board ID
                        try {
                            def boardID = source.value.toInteger()
                            def allViews = rapidViewService.getRapidViews(currentUser).value
                            def rapidView = allViews?.find { it.id == boardID }
                            sb.add(jsonResp2.title + "(Plan ID#" + jsonResp2.id + "). " + " % " + "Board name: " + " % " + rapidView.name + "<br>")
                            //Add the details to the string buffer
                        } catch (Exception E) {
                            sb.add("Error: " + jsonResp2.title + "(Plan ID#" + jsonResp2.id + ") had an issue with board ID: " + source.value.toString() + ". Error: " + E.toString() + "<br>")
                            //If the script encounters an error, log it to the buffer
                        }
                    } else if (source.type.toString() == "Filter") {
                        //If the source is a filter, get the filter name using the filter ID
                        try {
                            def filter = searchRequestManager.getSearchRequestById(source.value.toLong())
                            //Retrieve the filter object using the filter ID
                            sb.add(jsonResp2.title + "(Plan ID#" + jsonResp2.id + "). " + " % " + "Filter name: " + " % " + filter.name + "<br>")
                            //Add the details to the string buffer
                        } catch (Exception E) {
                            sb.add("Error: " + jsonResp2.title + "(Plan ID#" + jsonResp2.id + ") had an issue with filter ID: " + source.value.toString() + ". Error: " + E.toString() + "<br>")
                            //If the script encounters an error, log it to the buffer
                        }
                    }
                }
            }
        }
    })
}
//***********End Function***********

return sb
//Return the results
You can easily remove all permissions from a Confluence DC Space, or a Confluence Cloud Space. Confluence Server, though? You’re out of luck.
Imagine you migrated from Confluence Cloud to Confluence Server, and you wanted to remove all permissions on a Space (except for maybe "View Space"). That's a whole lot of manual clicking, unless you script it. You're going to need ScriptRunner for this.
The script below takes two inputs: a Space key, and a username. It needs the username of someone on the Space with Admin access, because Confluence will not let you remove EVERYONE with admin access from the Space.
Someone gets left behind.
Okay so it takes those two pieces of information as variables. It then makes use of two arrays. The first array is a prescribed selection of the permissions you’d like removed from the Space. Want to let everyone keep the View Space permission type? Take it out of the List! The second array is generated by the script. It’s a list of every username and group name with some kind of permission on the Space.
We then nest two loops, and iterate through the permission types and usernames. For each permission type, for each username, we call the method to remove that permission from that user on the given Space. The method is in a try/catch because not all users have all permissions, and the script knows to simply log the error and ignore the problem if that happens.
As is noted in the script, pre-generating the list of user and group names seems like an ugly way to do things. However if we simply try to call the “get username” method with every call to the permissions manager, it throws an error. This was the simplest way around that error.
On the subject of “simple”, we’re calling the SOAP service, which is unusual. However, this again is by FAR the easiest way to accomplish the task. Given that Confluence (and Jira) Server have a sunset date, it’s a sufficient fix for as long as the software will be around.
import com.atlassian.confluence.rpc.soap.services.SpacesSoapService
import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.confluence.spaces.SpaceManager

String spaceKey = "<Space Key>"
//Define a target Space
String adminName = "<Admin username>"
//We need to leave SOMEBODY with permission on the Space, and that somebody has to have Admin permission
def space = ComponentLocator.getComponent(SpaceManager).getSpace(spaceKey)
def removeSpacePermission = ComponentLocator.getComponent(SpacesSoapService)
//Create an array with every kind of permission that a user or group could possibly have in Confluence
def permissions = [
    "COMMENT",
    "CREATEATTACHMENT",
    "EDITBLOG",
    "EDITSPACE",
    "EXPORTSPACE",
    "REMOVEATTACHMENT",
    "REMOVEBLOG",
    "REMOVECOMMENT",
    "REMOVEOWNCONTENT",
    "REMOVEPAGE",
    "SETPAGEPERMISSIONS",
    "VIEWSPACE",
    "SETSPACEPERMISSIONS",
    "REMOVEMAIL"
]
def usernames = []
space.getPermissions().each { permission ->
    usernames.add(permission.getUserSubject())
    usernames.add(permission.getGroup())
}
//Build an array that holds all the users and groups with any kind of permission on the given space
//NOTE: you might think that you could just reference the username method directly in a loop, but we quickly encounter a "ConcurrentModificationException" error
//We get around this by instead adding the user/group names to an array
permissions.each { perms ->
    usernames.each { user ->
        if (user.toString() == adminName) {
            log.warn("Skipped removing Confluence Admin")
        } else {
            try {
                removeSpacePermission.removePermissionFromSpace(perms.toString(), user.toString(), spaceKey)
                //Remove the permission of a given type, from a given user, in the given space
            } catch (Exception e) {
                log.warn(e.toString())
            }
        }
    }
}
It's possible to connect to a Jira instance using Python, and it's possible to connect to AWS Comprehend using Python. Therefore, it is possible to marry the two, and use Python to assess the sentiment of Jira issues. There is one caveat when it comes to using this script: the authentication method below is not mine. I have linked to the Stack Overflow page where I found it in the script comments.
The script starts with three imports. We need the Jira library, logging, and the AWS library (boto3). You'll likely need to do a pip install of jira and boto3, if you've not used them before.
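For reference, here are those imports exactly as they appear in the full script at the end of this post:
from jira.client import JIRA
#The Jira library's client
import logging
#Standard-library logging, required by the connection function
import boto3
#The AWS SDK for Python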
After the imports we’re defining client, which we use to interact with the AWS API. Remember to change your region to whichever region is appropriate for you, in addition to filling in the credentials required to connect to your AWS instance:
client = boto3.client(
service_name= 'comprehend',
region_name='us-west-2',
aws_access_key_id='<>',
aws_secret_access_key='<>',
)
#Define the client, with which we will connect to AWS
Next comes the Jira connection, which as noted is not my work. Nothing within the function needs to be adjusted. Instead we define the username, password, and Jira server to which we wish to connect. As well, we’re defining which Jira project to aim for with the jira_project variable:
jira_server = "<>"
jira_user = "<>"
jira_password = "<>"
jira_project = "<>"
#Define the attributes of the Jira connection
We next define logging, because it's a requirement of the connect_jira method. After that comes the definition of jc, which is the actual connection to the Jira server. Notice that it takes the three Jira variables that we already defined, plus the log.
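Those two definitions look like this (both lines also appear in the full script at the end of this post):
log = logging.getLogger(__name__)
#Logging is a required attribute of the connect_jira method
jc = connect_jira(log, jira_server, jira_user, jira_password)
#Connect to Jira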
issues_in_proj is a JQL query that returns all of the issues in a given project. Remember that we told it which project to search when we defined jira_project earlier. Notice that the resulting query contains double quotes within itself. This is necessary, and the system expects that the name of the project will be enclosed in these double quotes:
issues_in_proj = jc.search_issues('project="' + jira_project + '"', maxResults=100)
Finally, we iterate through all of the issues that were returned by the JQL query. For each of those issues, we use the contents of the Summary field as the input to the Comprehend detect_sentiment method.
Worth noting is that in order to get the contents of the summary field, we’ve asked the system to return the raw details of the issue. This is every aspect of the issue, returned as JSON, and we’ve drilled down to the detail we want within the JSON response:
#Iterate through the issues returned by the JQL search
#Feed the results to AWS Comprehend. In this case we're feeding it the contents of the Summary field
for issue in issues_in_proj:
    response = client.detect_sentiment(
        Text=str(issue.raw['fields']['summary']),
        LanguageCode='en',
    )  # get the response
    print("Issue " + issue.key + " has a sentiment of " + response['Sentiment'] + "\n")
Comprehend returns its results as JSON, and we’ve simply selected the Sentiment attribute. Remember that these are case-sensitive.
{
    "Sentiment": "NEUTRAL",
    "SentimentScore": {
        "Positive": 0.0007247643661685288,
        "Negative": 0.012237872928380966,
        "Neutral": 0.9870284795761108,
        "Mixed": 0.000008856321983330417
    }
}
# -*- coding: utf-8 -*-
#Reference for the authentication method: https://stackoverflow.com/questions/14078351/basic-authentication-with-jira-python
#AWS Comprehend documentation: https://docs.aws.amazon.com/code-samples/latest/catalog/code-catalog-python-example_code-comprehend.html
from jira.client import JIRA
import logging
import boto3

client = boto3.client(
    service_name='comprehend',
    region_name='us-west-2',
    aws_access_key_id='<>',
    aws_secret_access_key='<>',
)
#Define the client, with which we will connect to AWS

#Define the Jira connection function
def connect_jira(log, jira_server, jira_user, jira_password):
    '''
    Connect to JIRA. Return None on error
    '''
    try:
        log.info("Connecting to JIRA: %s" % jira_server)
        jira_options = {'server': jira_server}
        jira = JIRA(options=jira_options, basic_auth=(jira_user, jira_password))
        # ^--- Note the tuple
        return jira
    except Exception:
        log.error("Failed to connect to JIRA")
        return None

jira_server = "<>"
jira_user = "<>"
jira_password = "<>"
jira_project = "<>"
#Define the attributes of the Jira connection

#Logging is a required attribute of the connect_jira method below
log = logging.getLogger(__name__)
#Connect to Jira
jc = connect_jira(log, jira_server, jira_user, jira_password)

#Update this JQL to search for whatever makes sense
issues_in_proj = jc.search_issues('project="' + jira_project + '"', maxResults=100)

#Iterate through the issues returned by the JQL search
#Feed the results to AWS Comprehend. In this case we're feeding it the contents of the Summary field
for issue in issues_in_proj:
    response = client.detect_sentiment(
        Text=str(issue.raw['fields']['summary']),
        LanguageCode='en',
    )  # get the response
    print("Issue " + issue.key + " has a sentiment of " + response['Sentiment'] + "\n")
As part of my grad school course work, I had half a dozen XML files with content that needed to be analyzed for sentiment. AWS Comprehend is a service that analyzes text in a number of ways, and one of those is sentiment analysis.
My options were to either cut and paste the content of 400 comments from these XML files, or come up with a programmatic solution. Naturally, I chose the latter.
The XML file is formatted like so:
<posts>
  <post id="123456">
    <parent>0</parent>
    <userid>user id</userid>
    <created>timestamp</created>
    <modified>timestamp</modified>
    <mailed>1</mailed>
    <subject>Post title</subject>
    <message>Message content</message>
  </post>
</posts>
What I needed to get at was the message element of each post, as well as the post id.
The script imports BeautifulSoup to work with the XML, and boto3, to work with AWS. We next define a string buffer, because we need to store the results of the analysis somehow.
Next we define the client, which tells AWS everything it needs to know. Tell it the service you're after, the AWS region, and the tokens you'd use to authenticate against AWS. After that we provide a list of XML files that the script needs to parse, and tell it to loop through and read each one.
We next tell BeautifulSoup to find all of the elements with a “post” type. This saves us having to drill down through the entire hierarchy of the XML file.
Armed with an array of all of the posts in the current XML file, we loop through that array. We first examine the length of the message (the content) of the current post. If it exceeds 4,999 bytes, we don't send it to the API; the Comprehend API has a 5,000-byte limit.
If the current post's message is under that limit, we come to the point where we actually send it to the Comprehend service. We define the response object as the result of sending a set of attributes to the detect_sentiment method of the client. This is the line that tells Comprehend specifically what you want it to do with the text you're sending.
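Here's the heart of that check-and-send logic, extracted from the full script below:
for post in xmlPosts:
    if len(str(post.message)) > 4999:
        stringBuffer.append(str(post['id']) + "^" + "was too big")
        #Comprehend rejects documents over 5,000 bytes, so note the post and skip it
    else:
        response = client.detect_sentiment(
            Text=str(post.message),
            LanguageCode='en',
        )
        #Send the message text to Comprehend for sentiment analysis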
Comprehend should send back the response, from which we’ll extract the relevant attributes. Below is the format of the JSON that AWS sends back as the response:
{
    "Sentiment": "NEUTRAL",
    "SentimentScore": {
        "Positive": 0.0007247643661685288,
        "Negative": 0.012237872928380966,
        "Neutral": 0.9870284795761108,
        "Mixed": 0.000008856321983330417
    }
}
If I wanted to simply access the sentiment, I would call response['Sentiment']. In order to retrieve any aspect of the sentiment score, I would call response['SentimentScore']['Positive'], or whichever type of score I was after. The script below returns all attributes of the JSON response, and stores them in the string buffer.
The final step in the script is to print the string buffer that is now holding all of our responses. Here’s the full script:
from bs4 import BeautifulSoup
import boto3

stringBuffer = []
#Provide a string buffer to store the results in

client = boto3.client(
    service_name='comprehend',
    region_name='us-west-2',
    aws_access_key_id='<>',
    aws_secret_access_key='<>',
)
#Define the client, with which we will connect to AWS

files = ['file1.xml', 'file2.xml']
#Provide a list of files to loop through

for file in files:
    with open(file, 'r', encoding="utf8") as f:
        data = f.read()
    #Pass the stored data to the BeautifulSoup parser
    xmlData = BeautifulSoup(data, 'xml')
    #Retrieve the XML
    xmlPosts = xmlData.find_all('post')
    #Find all instances of a "post" element in the XML
    for post in xmlPosts:
        if len(str(post.message)) > 4999:
            stringBuffer.append(str(post['id']) + "^" + "was too big")
        else:
            response = client.detect_sentiment(
                Text=str(post.message),
                LanguageCode='en',
            )  # get the response
            stringBuffer.append(str(post['id']) + "^" + str(response['Sentiment']) + "^" + str(response['SentimentScore']['Positive']) + "^" + str(response['SentimentScore']['Negative']) + "^" + str(response['SentimentScore']['Neutral']) + "^" + str(response['SentimentScore']['Mixed']))

for line in stringBuffer:
    print(line)
I found myself needing to examine issues that came through our Jira Service Desk workflow, to determine if they had a form attached. If they didn’t have a form attached, i.e. someone created the issue manually instead of through the Service Desk Portal, the workflow would be instructed to handle them in a certain way.
This turned out to be surprisingly difficult. There’s no easily accessed attribute associated with issues in Jira that indicates whether or not they have a form attached.
In the end, I determined that it was possible to examine issues in this way by applying some JQL to them.
import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.jql.parser.JqlQueryParser
import com.atlassian.jira.web.bean.PagerFilter
import com.atlassian.jira.bc.issue.search.SearchService

def queryParser = ComponentAccessor.getComponent(JqlQueryParser)
def issueManager = ComponentAccessor.getIssueManager()
def searchService = ComponentLocator.getComponent(SearchService)
def user = ComponentAccessor.getJiraAuthenticationContext().getLoggedInUser()

def issue = issueManager.getIssueObject("<issue key>")
//Define an issue against which the script will be run
def query = queryParser.parseQuery('issueFormsVersion > 1 and key = ' + issue.key)
//Define the query parameters
def search = searchService.search(user, query, PagerFilter.getUnlimitedFilter())
//This gives us a list of issues that match the query
if (search.results.size() > 0) {
    //If any results were found, it means that the issue had a form attached
    //Do something here
}
The above script is pretty simple. The JQL attribute issueFormsVersion allows us to detect if an issue has a form with any version associated with it. It currently runs against a single predefined issue key, but this could easily be expanded.
You could approach this in any number of ways. This script basically asks the system “is there an issue with the current issue’s key, and does it have a form with any version number?” If the answer to both is yes, then our current issue must have a form attached to it.
I found myself with an interesting Jira problem today. We had a dashboard that wasn’t showing all issues relevant to the JQL query in the filter. The issues were fully present in the system, but would only appear if we went in after creation and tweaked them in some way. Essentially we had issues that weren’t being picked up by the filter because the system didn’t see them as “complete”.
Here’s the sanitized JQL:
project = <project> AND status = Queued AND "<Custom Field>" = "<value>"
It was picking up some of the issues, remember. And the issues were all created in the same way: they came in through the Service Desk as Service Requests. So the JQL wasn’t the issue.
I already had a listener running as part of the process, so I tried adding an issueUpdate statement to it:
issueManager.updateIssue(user, issue, EventDispatchOption.ISSUE_UPDATED, false)
This did not resolve the issue. I next tried updating the issue summary to be itself as part of the Workflow Transition process:
issue.setSummary(issue.summary + " - " + issue.reporter.displayName)
This also did not resolve the issue.
In the end I solved the issue by introducing a re-index to a Listener that was listening for Issue Updates:
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.index.IssueIndexingService
ComponentAccessor.getComponent(IssueIndexingService).reIndex(issue)
By triggering a re-index of the issue, the system was enticed to view the issue as complete, and the dashboard displayed it. Please note that this is different from a re-index of the entire system.
It may be that if the issue sat long enough, the re-index would happen automatically. However, an organization often doesn't have the time to sit and wait for issues to sort themselves out.
At the end of the week I had the script working according to the specifications. We tested it on Friday afternoon, and all was well. On Monday morning the project manager sent me an email, notifying me that the script now had to work a completely different way. It also had to be finished by the end of that day, as Tuesday was the day they were using the script as part of a training session.
The good news was that for the most part, the changes I had to make to the script were reductive. It had to do fewer things. However, I had initially included the Custom Field update as part of the transition:
def issueInputParameters = issueService.newIssueInputParameters()
issueInputParameters.addCustomFieldValue(customFieldID, customFieldValue)
//The request is for a new workstation, so set Next to Act to Tier 1 - IDIR Services
issueInputParameters.setSummary(issue.summary + " - " + issue.getReporterUser().displayName)
//Set the summary to the current summary plus the display name of the reporter/requestor
Above you can see that the update to the custom field was added as a transition parameter. The new script did away with the transitions, so I had to update the custom field in a different way.
Ordinarily the process to update a custom field would be something like this:
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.event.type.EventDispatchOption
import com.atlassian.jira.issue.MutableIssue
def issueManager = ComponentAccessor.getIssueManager()
MutableIssue issue = issueManager.getIssueByCurrentKey("<key>")
def user = issue.getReporter()
def customFieldManager = ComponentAccessor.getCustomFieldManager()
def customField = customFieldManager.getCustomFieldObject(123456)
issue.setCustomFieldValue(customField, "New Value")
issueManager.updateIssue(user, issue, EventDispatchOption.DO_NOT_DISPATCH, false)
However, this script ran as a Listener, so the process is different. I found an example from the Adaptavist Script Library that pretty clearly seemed to meet my requirements:
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.ModifiedValue
import com.atlassian.jira.issue.util.DefaultIssueChangeHolder
log.warn(issue.key)
// the name of the custom field to update
final customFieldName = '<customField>'
// the new value of the field
final newValue = '<value>'
def customFieldManager = ComponentAccessor.customFieldManager
def issue = event.issue
def customField = customFieldManager.getCustomFieldObjects(issue).findByName(customFieldName)
assert customField : "Could not find custom field with name $customFieldName"
customField.updateValue(null, issue, new ModifiedValue(issue.getCustomFieldValue(customField), newValue), new DefaultIssueChangeHolder())
Notable is the lack of an updateIssue statement, as would be required in a standard script. This isn’t an oversight: the statement isn’t required in this case.
issueManager.updateIssue(user, issue, EventDispatchOption.DO_NOT_DISPATCH, false)
//Not required for a Listener
I added the script to a Listener and tested it out.
Nothing happened.
The logs noted the key of the issue that I created, but the custom field wasn't updated. That was pretty strange; usually the Script Library examples are pretty solid. I double-checked that I had set the right final values, and all appeared to be well. I tried a few tweaks and a few other methods from the internet, but none seemed to help. Eventually I went back to the original code snippet, put the call to the updateValue method into a try/catch statement, and logged the exception. The logs held an error for me:
java.lang.ClassCastException: class java.lang.String cannot be cast to class com.atlassian.jira.issue.customfields.option.Option
Now I was getting somewhere!
From the error message I could see that the method didn’t like my attempt to cast a string to the Option class. It was only then that I remembered that the custom field in question was a dropdown/single-select, not a freeform text field. In the case of dropdown fields, you need to feed it an Option, rather than a simple string or integer.
I found a solution that worked for me on the Atlassian customer forums. Here’s a stripped down example based on that link:
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.ModifiedValue
import com.atlassian.jira.issue.util.DefaultIssueChangeHolder

// the name of the custom field (single select list type)
final customFieldName = "<custom field name>"
// the value of the new option to set
final newValue = "<known value in the list of Options for the custom field>"
// the issue key to update
final issueKey = "<issue key>"

def issue = ComponentAccessor.issueManager.getIssueByCurrentKey(issueKey)
assert issue: "Could not find issue with key $issueKey"
//Get the target issue as an Issue object
def customFields = ComponentAccessor.customFieldManager.getCustomFieldObjectsByName(customFieldName)
assert customFields: "Could not find custom field with name $customFieldName"
def customField = customFields.first()
//Get the custom field as a CustomField object
def availableOptions = ComponentAccessor.optionsManager.getOptions(customField.getRelevantConfig(issue))
//Return all of the valid options for the custom field
def optionToSet = availableOptions.find { it.value == newValue }
assert optionToSet: "Could not find option with value $newValue. Available options are ${availableOptions*.value.join(",")}"
//Define an Option object by finding it in the list of options we've already pulled from the custom field
customField.updateValue(null, issue, new ModifiedValue(issue.getCustomFieldValue(customField), optionToSet), new DefaultIssueChangeHolder())
//Update the custom field with the new value
I have a series of ProForma forms that are submitted as issues to the Jira Service Desk.
I needed to run some scripts against these issues after they were created and fast-tracked into a Queued state. I elected to run a Groovy script as a Listener on Issue Update. The thinking was that because the state of the ticket was being updated from Open to Queued, the Listener would have plenty of material to work with.
The issue that I encountered was that the listener was only detecting some of the issues. I enabled logging at the outset of the Listener script and told it to record any ticket that was subject to an Issue Updated event. Some of the issues created by the forms weren’t being detected by this at all. At the same time, some of them were.
The issue was consistent, in that certain forms were never detected by the listener, and certain forms were always detected. There was no appreciable difference in the form setup or the way the tickets were processed. The Listener script itself simply examined the Request Type field of each ticket (which is a Jira Core field), and routed the ticket based on that value. All of the forms had that field.
The answer was that only forms with linked custom fields were being detected. In other words, only the populating of the linked custom field with the value of the field on the form was being picked up by the Listener. If the form did not have a custom field, or if that custom field was not populated with a value, the Listener didn’t consider that issue to have ever been updated. This, despite the fact that every one of the issues had been subject to a Status change.
The solution was simply to turn one of the fields on each form into a mandatory custom field. It didn't even need to show up on the issue screen; it simply needed to be populated with a value as part of the issue creation process.
As is so often the case, the easy part was solving the problem. The hard part was nailing down what the problem actually was!
During the process of writing a listener for Jira, I found myself encountering a strange error. The error looked like this:
com.atlassian.servicedesk.api.AuthorizationException: The action performed required a logged in user. Please log in and try again.
This was strange for two reasons. First, I am a Jira administrator with total access to the entire instance. Second, my script had explicitly supplied the logged-in user (myself) as the account under which to run the script.
What gives?
The code I used to supply my own account to the script looks like so:
def user = ComponentAccessor.getJiraAuthenticationContext().getLoggedInUser()
This solution had been more than sufficient previously. I searched and I searched and I couldn’t find anything related to my issue. If you search the text of the error, three results are returned, and they’re about REST API permissions. Not applicable here.
In the end I accidentally stumbled on the answer, by trying different solutions for the action I was trying to take. What I was trying to do was get the Request Type of the issue in question.
The solution was to explicitly provide an account to the script, under which the method could run. That is, I changed the code to:
def user = userManager.getUserByName("ken_mcclean")
It’s a small change, but it made all the difference. For whatever reason, querying the Request Type requires that your code be this specific.
After making this change, I stopped getting permissions errors, and proceeded to set up the rest of the listener. I’m going to leave the full text of the error message below, in case anyone is searching for a solution to this issue in the future:
2022-05-07 08:18:28,609 ERROR [runner.AbstractScriptListener]: *************************************************************************************
2022-05-07 08:18:28,610 ERROR [runner.AbstractScriptListener]: Script function failed on event: com.atlassian.jira.event.issue.IssueEvent, file: null
com.atlassian.servicedesk.api.AuthorizationException: The action performed required a logged in user. Please log in and try again.
at com.atlassian.servicedesk.internal.api.util.EitherExceptionUtils.httpStatusCodeToException(EitherExceptionUtils.java:89)
at com.atlassian.servicedesk.internal.api.util.EitherExceptionUtils.lambda$anErrorEitherToException$0(EitherExceptionUtils.java:36)
at io.atlassian.fugue.Either$Left.fold(Either.java:586)
at com.atlassian.servicedesk.internal.api.util.EitherExceptionUtils.anErrorEitherToException(EitherExceptionUtils.java:32)
at com.atlassian.servicedesk.internal.feature.customer.request.requesttype.RequestTypeServiceImpl.getRequestTypes(RequestTypeServiceImpl.java:44)
at com.atlassian.servicedesk.api.requesttype.RequestTypeService$getRequestTypes$0.call(Unknown Source)
at Script6663.run(Script6663.groovy:125)
I found myself in a situation wherein a ProForma form on the Jira Service Desk contained a custom field. I needed the contents of that field to dictate which team the resulting issue was assigned to.
The first thing I tried was to add a Groovy script to a transition in the Service Desk Workflow. If I could tell the script to transition the issue based on the value of a field, the issue should be resolved.
This turned out to not be possible. For some reason, the value of a custom field is not populated from the form to the field until after the ticket “lands”, or finishes moving through the workflow. It’s not enough to simply trigger the script after the Open/Create transition. No matter what, the value of the custom field is not available until after all of the initial transitions have finished. No matter how you try to reference the custom field, the value returned will always be NULL.
This was extremely aggravating, especially when it seems like such a simple solution SHOULD work, and it’s not clear whether the issue lies with your script or with the system. The solution you’re going to come across most often is to add a sleep() timer to your code. This does not solve the issue.
In the end, I came up with two solutions. One of them was pretty ugly, and the other was a lot cleaner.
The first was to simply run a scheduled job that looked for tickets matching certain criteria, and have that update the ticket in the ways that I needed. This is less than ideal; it presents a pretty heavy load on the system, and runs mindlessly.
The better solution was to set up a listener to watch for ticket updates in the Service Desk project. Rather than running constantly, listeners listen for specific events or actions happening in the system. This makes it a much more selective process; it only runs when certain events or triggers occur. Setting up a listener presented its own unique challenges, but in the end it was the solution that I was looking for. One example of a challenge is that for some reason, I couldn't get it to update the summary of the ticket. Rather, I could get it to update the summary, but it would then refuse to process the rest of the script. This frustrated me to no end, because it was such a simple thing that wasn't working. It worked just fine in the ScriptRunner script console. Something about the listener was different.
As a workaround, I added a summary update to a Fast Track post-function. This worked, but I didn't want to leave it at that.
This morning I realized that I was missing the forest for the trees. I already had the functionality in place to add Issue Update Parameters during issue transitions. I simply needed to include an additional parameter. So it went from this:
def issueInputParameters = issueService.newIssueInputParameters()
issueInputParameters.addCustomFieldValue(12210, "13702")
def transitionValidationResult = issueService.validateTransition(user, issue.id, actionId, issueInputParameters, transitionOptions)
if (transitionValidationResult.isValid()) {
    def transitionResult = issueService.transition(user, transitionValidationResult)
}
To this:
def issueInputParameters = issueService.newIssueInputParameters()
issueInputParameters.addCustomFieldValue(12210, "13702")
issueInputParameters.setSummary("New Summary")
def transitionValidationResult = issueService.validateTransition(user, issue.id, actionId, issueInputParameters, transitionOptions)
if (transitionValidationResult.isValid()) {
    def transitionResult = issueService.transition(user, transitionValidationResult)
}
In order to fix the issue, I needed to walk away from it. It’s so easy to get tunnel vision, and to get stuck on a certain solution or idea that you’re certain is going to work eventually.
Sometimes taking a break is the most productive thing you can do for yourself.
Jira issues can only be transitioned between states in a manner that follows the Workflow of the parent project. In other words, before you begin trying to script the transition of a Jira issue, you must understand the workflow and what the available transitions actually are.
Further to that point, there's more to transitions than simply changing the status of an issue. Some transitions have criteria. For example, you may want to move an issue from Open to Pending. In order to do so, you may need to select a Pending state, and add a comment. How do you account for that in a script?
Using Groovy and ScriptRunner to transition Jira issues is a pretty straightforward process. So too is this process quite simple if you need to include transition criteria. It’s simple, if you can find a guide on how to do it. As with most of the things I blog about, I couldn’t find instructions for accomplishing this simple task, so I’m writing my own.
The script we're going to explore transitions a Jira issue from one state to another, and fills in the criteria required to make the transition a valid one. As part of the transition process, it checks to ensure that the transition is valid. The majority of this code isn't actually mine; it's a modification of an Adaptavist script. But the script was incredibly difficult to find, and the instructions were unclear, so I felt there was still merit in writing this blog post.
In order to make this script work, you'll need a few things:
– A test issue
– Knowledge of the workflow, so that you know what is and is not a valid transition
– The transition ID (more about that in a moment)
As noted above, you’ll need two things from the project’s Workflow for this script. You’ll need knowledge of the transitions, and you’ll need the Transition ID.
You’ll need to be a Jira Administrator to edit the Workflow. Open the Workflow in question for editing, and choose the Text view. What we’re presented with is a chart view of the Workflow. The information you’ll need next depends on where your issue is going to start. For example, you may have a section of the chart with a Linked Status called Open. A ticket with a status of Open can be transitioned to any of the statuses under the Transition (id) heading. Ultimately what we’re after is the transition ID of the target status.
In my case, the issue originated with a status of Open, and I wanted to transition it to Pending. In my system, Pending has an ID of 501, and 501 is the piece of data we need to note. Armed with the transition ID of the target status, we can now assemble the script. The entire script is below. As you can see, it's not terribly complicated:
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.Issue
import com.atlassian.servicedesk.api.requesttype.RequestTypeService
import com.onresolve.scriptrunner.runner.customisers.WithPlugin
import com.atlassian.jira.issue.CustomFieldManager
def customFieldManager = ComponentAccessor.customFieldManager
def issueManager = ComponentAccessor.issueManager
def issue = issueManager.getIssueByCurrentKey("<>")
//Fill in an issue key here
def currentUser = ComponentAccessor.jiraAuthenticationContext.loggedInUser
def issueService = ComponentAccessor.issueService
def workflowManager = ComponentAccessor.workflowManager
def workflowActionId = 501
//Provide the transition ID of the target status for the issue
def issueInputParameters = issueService.newIssueInputParameters()
//Define a new input parameter object
issueInputParameters.setSkipScreenCheck(true)
//Skip the screen checks during transitions
issueInputParameters.setComment("This is a required comment")
issueInputParameters.addCustomFieldValue(<custom field ID>, "<value>")
//Provide a custom field ID, and a value to add to that field
def transitionValidationResult = issueService.validateTransition(currentUser, issue.id, workflowActionId, issueInputParameters)
//Create a transition object by giving it a user ID, an issue ID, the workflow action ID, and the set of parameters that we defined earlier
assert transitionValidationResult.valid: transitionValidationResult.errorCollection
def transitionResult = issueService.transition(currentUser, transitionValidationResult)
assert transitionResult.valid: transitionResult.errorCollection
//Check that the transition is valid
In the context of this simple example, there are only a few things you’d need to change to make this work for yourself. You’ll need to fill in a proper issue key and transition ID. In the given example you’d also need to provide a custom field ID and a value, but that’s not strictly necessary.
Let’s examine this a little more deeply.
Most of the setup of the script should be familiar if you've done any work with Groovy and Jira. It's only when we hit workflowActionId that things might start to become unfamiliar.
setSkipScreenCheck is a method that does pretty much what it says. In the words of Atlassian themselves: “By default, the Issue Service only allows setting values to fields which appear on the screen of the operation which you are performing (e.g. the Edit screen). By enabling this flag, the Issue Service will skip this check.”
Please do note that this method does not allow you to skip any of the parameters required for completing this transition. For example, if I ran the above script without declaring a comment as a transition parameter, I’d get this error message: Error Messages: [Comment: Please provide a comment for this transition].
We next come to the meat of the script. This is the point at which you tell the script which fields to fill in or actions to take, as part of the transition. If you simply updated the value of a required custom field, the script wouldn’t have any idea that it was part of the transition and the transition check would fail. For example, in my case the transition from Open to Pending requires that I include a comment about why the ticket is being set to Pending, and it also requires that I update a particular custom field with a value.
You can add as many actions or parameters as you see fit, at this time. Here’s the documentation for the method from Atlassian.
Having declared as many parameters as you'd like, your customization of the script is done. The rest of the script defines a transition, using the collection of parameters that we defined as one of its arguments. The script then checks that the transition is valid, and if so, completes the transition.
It is entirely possible to set up Jira so that a subtask may remain open, while the parent task is closed. This effectively creates orphan subtasks, not connected to any open issue or ticket.
Identifying these is a matter of first identifying all subtasks, and then checking the status of both the subtask and its parent.
We first identify all subtasks for a given project by invoking a service context, and running some JQL against the Jira instance:
import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.jira.bc.JiraServiceContextImpl
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.jql.parser.JqlQueryParser
import com.atlassian.jira.web.bean.PagerFilter
import com.atlassian.jira.bc.issue.search.SearchService

def issueManager = ComponentAccessor.getIssueManager()
def searchService = ComponentLocator.getComponent(SearchService)
def user = ComponentAccessor.getJiraAuthenticationContext().getLoggedInUser()

JiraServiceContextImpl serviceCtx = new JiraServiceContextImpl(user)
//Declare a search context using the logged-in user
def queryParser = ComponentAccessor.getComponent(JqlQueryParser)
//Declare a parser to handle the JQL query
def query = queryParser.parseQuery('project = "<project name>" ')
//Define the JQL query. In this instance we're returning all issues under a given project
def search = searchService.search(user, query, PagerFilter.getUnlimitedFilter())
//Define a search, using all the pieces defined so far
Naturally you would fill in "project name" with the name of the target project. This service context allows us to define a search service. The search service takes a user and a query as input. In the context above, we've defined "user" as whoever is logged in to the system and running the script.
So we’re able to run a search. Now what?
By invoking the results of the search, we’re able to iterate through the list:
search.results.each { retrievedIssue ->
    //Iterate over the results
    //Do something with each result
}
The next step is to identify any issue that is a subtask. There are a number of ways that this could be accomplished. One of the ways is to simply check if the issue has a parent task. If it has a parent, it must therefore be a subtask! We actually start by first handling anything that is not a subtask, as the script will otherwise throw a null error. In effect, we've told the script to do something with the issue so long as it has a parent task.
if (retrievedIssue.getParentObject() == null) {
    //We determine if an issue is a subtask by testing for a parent object
} else {
Next we need to identify any issue with a parent object with a status of “closed”, but which itself is not “closed”:
if (retrievedIssue.getParentObject().getStatus().name == "Closed") {
    //If the parent object's status is closed
    if (retrievedIssue.getStatus().name != "Closed") {
        //And if the subtask/child issue's status is NOT closed
The result would be logged or added to a string buffer. Finally, we put it all together:
import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.jira.bc.JiraServiceContextImpl
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.jql.parser.JqlQueryParser
import com.atlassian.jira.web.bean.PagerFilter
import com.atlassian.jira.bc.issue.search.SearchService

def issueManager = ComponentAccessor.getIssueManager()
def searchService = ComponentLocator.getComponent(SearchService)
def user = ComponentAccessor.getJiraAuthenticationContext().getLoggedInUser()

JiraServiceContextImpl serviceCtx = new JiraServiceContextImpl(user)
//Declare a search context using the logged-in user
def queryParser = ComponentAccessor.getComponent(JqlQueryParser)
//Declare a parser to handle the JQL query
def query = queryParser.parseQuery('project = "WFCST" ')
//Define the JQL query. In this instance we're returning all issues under a given project
def search = searchService.search(user, query, PagerFilter.getUnlimitedFilter())
//Define a search, using all the pieces defined so far

search.results.each { retrievedIssue ->
    //Iterate over the results
    if (retrievedIssue.getParentObject() == null) {
        //We determine if an issue is a subtask by testing for a parent object
    } else {
        if (retrievedIssue.getParentObject().getStatus().name == "Closed") {
            //If the parent object's status is closed
            if (retrievedIssue.getStatus().name != "Closed") {
                //And if the subtask/child issue's status is NOT closed
                log.warn("This subtask is open, but has a closed parent: " + retrievedIssue.getKey())
                //If the parent is closed but the child is not, we must have an orphan, and that should be logged
            }
        }
    }
}
This script limits the search query to a single project. It’s quite trivial to extend this script to parse ALL projects in a Jira instance:
import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.jira.bc.JiraServiceContextImpl
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.jql.parser.JqlQueryParser
import com.atlassian.jira.web.bean.PagerFilter
import com.atlassian.jira.bc.issue.search.SearchService

def searchService = ComponentLocator.getComponent(SearchService)
def user = ComponentAccessor.getJiraAuthenticationContext().getLoggedInUser()
def prList = ComponentAccessor.getProjectManager().getProjectObjects().key
def sb = []
//Define a string buffer to hold the results

JiraServiceContextImpl serviceCtx = new JiraServiceContextImpl(user)
//Declare a search context using the logged-in user
def queryParser = ComponentAccessor.getComponent(JqlQueryParser)
//Declare a parser to handle the JQL query

prList.each { projectKey ->
    def query = queryParser.parseQuery('project = "' + projectKey + '" ')
    //Define the JQL query. In this instance we're feeding the key of each project into the JQL on each iteration of the loop
    def search = searchService.search(user, query, PagerFilter.getUnlimitedFilter())
    //Define a search, using all the pieces defined so far
    search.results.each { retrievedIssue ->
        //Iterate over the results
        if (retrievedIssue.getParentObject() == null) {
            //We determine if an issue is a subtask by testing for a parent object
        } else {
            if (retrievedIssue.getParentObject().getStatus().name == "Closed") {
                //If the parent object's status is closed
                if (retrievedIssue.getStatus().name != "Closed") {
                    //And if the subtask/child issue's status is NOT closed
                    sb.add("This subtask is open, but has a closed parent: " + retrievedIssue.getKey() + "</br>")
                    //If the parent is closed but the child is not, we must have an orphan, and that should be logged
                }
            }
        }
    }
}
return sb
Notice that we’ve defined prList as a list of every project key in Jira. We then loop through the list, and feed each key into the JQL.
This is part three of my series on using Python to connect to the Twitter API.
Imagine for a moment that you had a specific vision for your Twitter account. A vision of balance, and harmony. What if you only followed people who also followed you? Whether or not you want to curate your Twitter experience in this transactional way is entirely up to you. It’s your account!
We can do that with Python. As always, replace the placeholders with your own account credentials. See Part One of this series if you’re not sure how to do that.
Let’s take a look at the code required to do this:
import tweepy

consumerKey = "<>"
consumerSecret = "<>"
accessToken = "<>"
accessTokenSecret = "<>"

auth = tweepy.OAuthHandler(consumerKey, consumerSecret)
auth.set_access_token(accessToken, accessTokenSecret)
#Call the api
api = tweepy.API(auth, wait_on_rate_limit=True)

#Define two empty lists
followers = []
following = []

#Get the list of friends
for status in tweepy.Cursor(api.get_friends, count=200).items():
    #Add the results to the list
    following.append(status.screen_name)

#Get the list of followers
for status in tweepy.Cursor(api.get_followers, count=200).items():
    #Add the results to the list
    followers.append(status.screen_name)

#Compare the lists and take action
for person in following:
    if person not in followers:
        api.destroy_friendship(screen_name=person)
        print("Unfollowed " + person)
As always, we start by importing Tweepy, and declaring our credentials variables. We use the credentials to connect to Twitter with OAuth.
Next we define two lists, as we’ll be collecting two lists of people and comparing those lists.
Using the Cursor, we get a list of all of the friends of a Twitter account. As we have not specified a target account, the list returned will be that of the authenticated user.
We take the same action again, but this time we store a list of all the followers of the authenticated account. In the parlance of the Twitter API, a friend is someone an account follows; a follower is someone following the account. Now we have two lists: friends and followers. What we want to do next is look at all of the people we're following. For each person we follow, we check to see if that person is following us back by looking at the list of followers. In other words, say the account is following "John". We check the list of followers. If "John" isn't a follower, then we're following him but he must not be following us. Rude!
That's what the final loop does. For each person in the list of accounts that we follow, we check the other list to see if they're in there. If they're not following us, we call the destroy_friendship() method and feed it the person variable as an argument. Finally, we write to the console what we've done, so that we can sanity-check the operation.
Tweepy is a Python library that acts as a wrapper around the Twitter API. The easiest way to install it is to use something like pip, i.e.:
pip install tweepy
More information about the installation of Tweepy can be found here.
We’re going to use the get_followers() method to learn about Tweepy. When learning how to use a new method, the first thing we want to do is check the documentation. The documentation for the get_followers() method is here.
Under Resource Information, we have a table of information:
Response formats | JSON
Requires authentication? | Yes
Rate limited? | Yes
Requests / 15-min window (user auth) | 15
Requests / 15-min window (app auth) | 15
This tells us a number of things. One important detail about the method is that the maximum number of requests we can make is 15, every 15 minutes. Once that threshold is reached, the API will make us wait before it will accept any more requests. It might not sound like it at first, but this is quite a severe limitation. Not only can you only make a certain number of requests in a given time frame, but each request will only return a certain number of results.
Under the Parameters section on that same page, you’ll find more information about this method. The parameter that we’re interested in is called count:
“The number of users to return per page, up to a maximum of 200. Defaults to 20.”
So, if you simply request a list of followers, the API will by default only give you 20 results. If you specify the maximum count of 200 (we’ll explore this more in a bit), you still only get a list of 200 followers. That leaves us with two questions: how do we get more than one page of results, and what do we do when an account has more followers than a single rate-limit window allows?
We’ll explore the answer to both of these questions. First, let’s take a look at a very basic example of working with Python and the Twitter API.
All of the code examples we use in this series of blog posts will make use of the same basic setup. We need to import Tweepy, we need to set up the OAuth connection, and we need to define the API as something we can work with. If you copy this code, ensure you replace the placeholders with your own OAuth credentials.
(You’re welcome to use whichever Python IDE works best for you. I use Spyder.)
import tweepy
consumerKey = "<your API key goes here>"
consumerSecret = "<your API secret goes here>"
accessToken = "<your access token goes here>"
accessTokenSecret = "<your access token secret goes here>"
#Set the access credentials
auth = tweepy.OAuthHandler(consumerKey, consumerSecret)
auth.set_access_token(accessToken, accessTokenSecret)
#define the oauth parameters
api = tweepy.API(auth)
#Define the Twitter API and tell it to use the OAuth settings
#-----This is the end of the "set-up" portion of the script-----
followers = api.get_followers()
#Ask the API for the followers of the authenticated account
for user in followers:
    print(user.name)
    #Print the name
The “followers” variable is where it gets interesting. We assign it the result of calling the get_followers() method on the already-defined “api”. Because we simply ask it to retrieve a list of followers, it assumes we want the followers associated with the authenticated user. If we wanted to specify which user’s followers to return, we would add a parameter:
followers = api.get_followers(screen_name="ken_mcclean")
This also works with other identifiers, such as account ID #.
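For example, assuming you know the numeric account ID, the same lookup can be made with the user_id parameter:
followers = api.get_followers(user_id="<account ID>")
#Fetch followers by numeric account ID instead of screen name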
Having done that, “followers” should now contain a list of the followers of the authenticated account. Remember that we haven’t addressed either of the above questions yet. We’ll get to those shortly.
Now we have a list of followers. If we loop through the list and simply print each “user”, we are presented with a massive list of attributes for each user. Assuming we only want to know the name of the user, we can specify that information using dot notation. The example above uses dot notation to specify that we only want the “name” of each “user”.
If we run the script, we should be presented with a list of twenty users. Remember, that’s the default number of results per page, and we haven’t asked the API for more than one page.
While working with the API, you may encounter an error that says something like this:
TypeError: get_followers() takes 1 positional argument but 2 were given
This generally means that you have passed a parameter to a method without specifying what that parameter means. In other words, you may have done something like this:
followers = api.get_followers("ken_mcclean")
Notice that we haven’t told the method what sort of value “ken_mcclean” actually is. The API used to accept these positional arguments, but no longer does. This still leaves us with a question: what is the second positional argument referenced in the error? The “invisible” first positional argument is the api object itself, which Python passes to the method automatically.
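The fix is simply to name the parameter, as in the example above:
followers = api.get_followers(screen_name="ken_mcclean")
#Naming the parameter tells the method what the value represents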
The Twitter API uses rate limiting. In other words, you can only make so many calls or requests to the service in a given period of time.
Let’s examine the rate limiting you’d encounter if you wanted to return a list of everyone who follows your account. Consider the following code:
import tweepy
consumerKey = "<your API key goes here>"
consumerSecret = "<your API secret goes here>"
accessToken = "<your access token goes here>"
accessTokenSecret = "<your access token secret goes here>"
#Set the access credentials
auth = tweepy.OAuthHandler(consumerKey, consumerSecret)
auth.set_access_token(accessToken, accessTokenSecret)
#define the oauth parameters
api = tweepy.API(auth)
#Define the Twitter API and tell it to use the OAuth settings
#-----This is the end of the "set-up" portion of the script-----
for user in tweepy.Cursor(api.get_followers,count=200).items():
    #Use the cursor to paginate
    print(user.name)
    #Print the name
Things have gotten slightly more complicated, but let’s unpack it.
We’re no longer declaring “followers” as a list. Instead we’re using a for-loop. The loop examines each “user” in the items that the Cursor returns. The Cursor does all the work of figuring out how many pages of results exist, and calls each page of results for you. Notice that we’re now specifying the maximum number of results per page (200).
So each time the Cursor returns a page of 200 results, we call each item that is returned the “user”, and use dot notation to return only the name of that user.
This basic setup is what we’ll use for the majority of the scripts that we explore in the series. Being able to use the Cursor, and pass methods to it, will allow you to get most of the information you desire out of Twitter. If you were interested in sourcing the list of followers of a different user, you’d simply add it as a parameter after invoking the get_followers() method:
for user in tweepy.Cursor(api.get_followers,screen_name="ken_mcclean",count=200).items():
We haven’t yet addressed the second question. What if you have more than 15 * 200 followers? The API limits you to that many results in a given time frame.
The answer is the wait_on_rate_limit flag. We need only change one line of code:
api = tweepy.API(auth,wait_on_rate_limit=True)
If the API tells the script that the rate limit has been exceeded, the script will now wait until the API gives the all-clear, then continue running. In effect, it pauses until the API says “hey, you’re good to make another 15 requests.” Without this functionality, the script simply stops when rate limiting kicks in.
This post will hopefully have assisted you in connecting Python to your Twitter account. In the next post we’ll look at some more involved operations that may be accomplished, using Tweepy.
A great deal of the available information regarding the use of Twitter and Python is outdated. The Twitter API has undergone several major revisions in the last few years, and many of the available tutorials now only lead to frustration.
Not only has the API undergone major revisions, but there are multiple supported versions of the API. Some methods referenced by online tutorials will only work with certain other methods!
My hope for this series is to provide a clear and concise tutorial for connecting to the Twitter API using Tweepy and Python.
In order to connect to the Twitter API, your account must be provisioned for Developer access. This is a free service, at the basic level, but does require additional setup. That will be the focus of this first blog post.
You will need a Twitter account, with a verified email address and a phone number attached.
This post focuses solely on gaining Developer access, and assumes you already have the account.
While I intend for this tutorial to be quite detailed, I trust that you can handle signing up for a Twitter account on your own. Ensure that you’ve verified your email address and added a phone number to your Twitter account.
After landing on the Developer page (https://developer.twitter.com), you’ll need to sign up by clicking the “sign up” button in the top right corner:
Next you’ll need to fill out a form. While it looks like an application form, no subsequent review of your application should be required. You should be granted access right away. I strongly urge you to say “no” when they ask about providing information to the government.
Fill out the form and click “Next”. Read the user agreement on the next page and agree to it by checking the checkbox. Click “Submit”. If you haven’t added a verified phone number to your account, you’ll get this warning:
Otherwise, the next screen you see should be this one:
Pick a name for your app. It must be unique. After you’ve named your app, click “Get keys.”
What you should see now are your API Keys and Bearer Token. Ensure that you save this information somewhere safe and secure. If you lose this information, you’ll need to regenerate it, and if you accidentally share this information someone could use it to access your account:
After saving this information, you’ll be taken to the main Developer Portal. You’ll have one Application listed, under the name you gave it. Click the key beside the gear icon to generate the rest of the information you’ll need:
Click “Generate” to generate the remaining pieces of required information:
What you’re generating are the Access Token and Access Token Secret. Much like the API keys, these pieces of information need to be stored securely, or else they’ll need to be regenerated:
You should now have four pieces of information: the API Key, the API Key Secret, the Access Token, and the Access Token Secret.
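Before moving on, you can sanity-check those four values with a minimal sketch. This assumes you’ve already installed Tweepy (covered elsewhere in this series); verify_credentials() returns the authenticated user if the credentials work:
import tweepy
auth = tweepy.OAuthHandler("<your API key goes here>", "<your API secret goes here>")
auth.set_access_token("<your access token goes here>", "<your access token secret goes here>")
api = tweepy.API(auth)
print(api.verify_credentials().screen_name)
#If the four values are valid, this prints the screen name of your own account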
With this information, we can use Python to connect to your account using the Twitter API. However, there is one more step required before this access is useful.
In order to request elevated access for your Twitter Developer account, click this link: https://developer.twitter.com/en/portal/products/elevated
The resulting sign-up process will be similar to that which you completed when initially requesting Developer access.
Most of the form should already be filled out for you. You’ll need to tell Twitter how good you are at coding:
After you’ve done that, click “Next”.
How you fill out the remainder of the form is up to you. Twitter would like to know what you plan to do with this increased access, and whether you’re planning to share the data. Again, I strongly suggest that you tell Twitter that you’re not providing information to the government.
Submit the form, and if all goes well you should be instantly provided with Elevated Access.
I’ve been considering pivoting my career toward developer advocacy. I’m a decent developer with great customer service skills, and I can see myself doing well in such a position. In order to do that, I need to have a stronger idea of what I believe the role will entail. When I’ve established that, I’ll know which jobs to apply for.
In this post, I am attempting to answer two questions: what do developers actually do, and how can an advocate help them do that?
In previous blog posts I have discussed the role of Knowledge Management in an organization. That role is the consistent application of the organization’s ideology. Put another way, the organization has a certain way it needs to run, and to think about things, and Knowledge Management helps with that process.
So, what is the role of the developer? When we start talking about developers, we quickly move from a very general business context to a much more specific one. Software can do many things, but in a lot of ways a developer is a developer. The offhand answer is, of course, that a developer develops software. This is a pat answer, and doesn’t get at the heart of a developer’s contribution to an organization. Whether a piece of software is intended for internal or external use, that piece of software is symptomatic of the ideology to which a company subjects itself, and the ideology that the organization wishes to be perceived as espousing.

Let’s dig deeper. Consider a company, Company X. Company X is on the forefront of the Web3 movement. They’re developing products that leverage blockchain, or NFTs, or cryptocurrency. The product they put out likely markets itself as being on that cutting edge; by utilizing these technologies, Company X commits to supporting some of the tenets of the ideology (such as decentralizing finance) that come with the technology. At the same time, the software directly represents the company’s internal ideology. Is the software fully tested? Did it ship on time? Does Company X have a culture of crunch and overwork, and is that apparent in the product that they produce? In this way, a developer may produce a product that attempts to meet the demands of two (or more) ideological influences. Ultimately, the quality of a piece of software is directly symptomatic of the internal institutional ideology under which a developer works.

Knowledge Management has an active role in shaping the ongoing ideology that an institution develops. As the curators of the information that ultimately informs ideology, the Knowledge Management team has a responsibility to be aware of not only their privilege, but also of the historic unequal privilege afforded to marginalized groups found within an organization.
Knowledge is power, and access to knowledge has long been wielded as a weapon by the ruling class. For many years the Bible was only available in Latin. This was because the ruling class (the clergy) were the only people who could read Latin. They were therefore the only ones able to interpret the Bible, giving them full authority over the working class as the voice of God on earth. In the middle of the 19th century, literacy tests were given to voters. As the literacy rates of marginalized groups such as African Americans were low, this effectively disenfranchised large groups of voters.

These are historic examples, but limited access to information continues to be used to shut out or keep down marginalized groups. The Freedom of Information Act was passed in the USA in 1967, and in theory should have allowed ordinary citizens to request information about the government by which they were being ruled. In practice this has not been the case, with FOIA requests often being denied or coming back heavily redacted.

At an organizational level, how can Knowledge Management address the presence of inequity? It is important that we reaffirm the mandate of Knowledge Management: the consistent application of an organization’s ideology. If the ideology of an organization does not support equal access to information, no external force on earth can compel the organization to address those inequalities.

Knowledge Management has a hand in shaping the ongoing ideological influence to which an organization is subjected. That is, Knowledge Management need not be passive; information is a living, dynamic force, and so too is the curation of knowledge an active process.
The argument is not that all groups need access to all data, but rather that access to information by certain groups should not be limited based on representational modalities. Access to information should solely be a business decision. Carefully considered access to information has the potential to address some of the cultural shortcomings that pervade all business settings. It’s not just a good idea; measured access to information is the job of Knowledge Management. What do you think?

Consider, if you will, a non-technical member of staff. They have been tasked with writing promotional materials for your organization’s new software offering. In order to do so, they will need to speak to the technical capabilities and nuances of the product. They have entered… the Developer Zone.
Rod Serling may be gone, but his legacy lives on wherever there is a disconnect between two worlds. In our case, the disconnect is between technical and non-technical persons. The challenge is to connect those groups without alienating either one; non-technical people may find themselves overwhelmed with jargon, and technical staff may find it difficult to dedicate time to teaching non-technical staff about the nuances of a product.
Knowledge Management can be the bridge between these worlds. Let us examine.

The goal of Knowledge Management is not simply to collect data; as we have previously discussed, the role of Knowledge Management is the consistent and considered application of information and ideology, in keeping with the ultimate goals of an organization.
When two different groups need to exchange information, Knowledge Management can act as a mediator. A robust Knowledge Management system allows for the asynchronous transfer of information; the two groups need not be in a room together. Instead, the Knowledge Management team should work with the engineering team to capture the relevant features and capabilities of the new software offering. It must be understood that this information is not transitory, but is a valuable piece of organizational information. It is the responsibility of the Knowledge Management team to capture this information in a place and manner that will allow the marketing team to access and understand it. In other words, the institutional memory of an organization must be translated to a centrally located source of truth. Subsequently, the Knowledge Management team must ensure that the information is available to the marketing team. The Knowledge Management team is not responsible for the veracity of the content of the knowledgebase, but is instead acting as the curators of a transmission and storage medium.

With all that being said, we can now address an obvious question: “why can’t the engineering team just email the marketing team?”. There are two reasons for this, both of which we’ve already touched on: an email is transitory, whereas a knowledgebase captures the information as lasting institutional memory; and a central source of truth allows the two teams to exchange information asynchronously, without being in a room together.

It is for these reasons that Knowledge Management is an essential and central part of the communications of any organization. Different teams have different strengths, and Knowledge Management serves to mitigate that disparity.
How much does your organization value the ability to say, “I don’t know”? Is wilful ignorance a core tenet of the institutional ideology to which your employees are subject?
The advantage of an effective Knowledge Management strategy is that as issues arise, the solutions to those issues may be catalogued. By resolving the issue and cataloguing the solution, the issue becomes an asset. The ability to solve the problem becomes part of the institutional memory of the organization. That ability may be monetized or harnessed, and the wheels of capitalism may continue to turn.
The disadvantage of an effective knowledgebase is that the organization no longer gets to claim ignorance of that particular issue. By cataloguing the ability to address an issue, an institution lays claim to some responsibility for it. In other words, it’s harder to pass the buck when the tools to address the issue are at hand.
This is a cynical viewpoint, but business is cynical. Let us consider an example.
In the context of a business that relies on volume, speed is king. One such example is a low-level call centre. Employees must meet the minimum standard to address a customer’s issue, in such a way that a follow-up by the customer is not necessary. In the case of large telecoms, customers often have little choice in the service provider they choose. For that reason, the organization does not benefit from employees going above and beyond, or having more knowledge than they require. The customer is effectively held captive, held hostage to the entropic and ever-conglomerating nature of big business.
Providing these employees with extraneous information would only slow them down. For that reason, one might imagine that such a business would value employees being able to say “I don’t know”.

It is not enough to simply not provide information to employees. That wilful ignorance, the grease that lubricates the wheels of quick commerce, must be an affirmed part of the institutional ideology by which employees are trained and measured. Feedback and employee reviews must assert that straying into the realm of speculation runs counter to the ultimate goal of the organization.
All that to say that Knowledge Management is not simply the process of collating as much information as possible. Rather, the process is a selective one, and ultimately must support the goals of the organization. Some organizations benefit from having comprehensive knowledgebases and all the answers.
And some just want to get you off the phone.

The ultimate goal of Organizational Knowledge Management is not to collect, store, and curate information. Those tasks are a means to the end with which Knowledge Management ultimately concerns itself: the application of ideology.
All industries benefit from a focus on consistency, in both material and conceptual spaces. Knowledge Management is the facilitation of clean, ideologically consistent information. The ultimate fate of that information is the prerogative of the rest of the organization; Knowledge Management is only concerned with how the information is perceived and presented.
All effective organizations have a consistent ideology by which they operate. Effective ideology is symptomatic of the deliberate fostering of a workplace culture, which manifests as a central source of truth (i.e. a knowledgebase). Consistent information is the means by which all organizations may move and grow in the same direction; inconsistent information causes organizations to move in different directions, resulting in stagnation or inefficiency.
Knowledge Management is ultimately the application of a consistent ideology at an organizational level. When policy dictates that an organization has a single source of truth, the job of Knowledge Management is to ensure the consistent application of a workplace’s culture and ideology. While Knowledge Management is the application of organizational ideology, the role of the Knowledge Manager is not to police that ideology. Rather, Knowledge Managers exist to ensure that the trashcan of ideology[1] from which an organization eats is as clean (consistent) as possible.
This starkly contrasts with Personal Knowledge Management. Personal Knowledge Management is the process of externalizing and organizing internal dialogue. Organizational Knowledge Management, on the other hand, is the capturing and curating of institutional memory in such a way that it supports the ideological goals of an organization. In other words, Personal Knowledge Management is the Self capturing what would otherwise be ephemeral. Organizational Knowledge Management is the process of capturing and affirming what is known to be true.
Institutional ideology and culture flow from the top; they are a result of the policies and organizational directives set forth by decision makers, and the constituent members of an organization bring that culture to life. Unchecked, culture tends to evolve in the direction of the lowest common denominator. Knowledge Management functions as the counterbalance to that, reaffirming and reiterating what an organization has decided is True.
The organization chooses a Truth, and Knowledge Management makes it so.
References:
[1] Žižek, Slavoj. 2009. The Sublime Object of Ideology. The Essential Žižek. London, England: Verso Books.
I worked in an inbound call centre for three years. I worked as a Level 1 analyst, meaning that the work I was doing was intended to resemble all of the previous calls that I’d taken. In other words, I wasn’t paid to come up with creative or unusual solutions.
In that time, I learned that there were three ways that clients could quickly become upset:
1. Being given the runaround
2. Being given the wrong answer
3. Being given inconsistent information
All three of these can be addressed by proper knowledge management practices.
Clients who feel as though they’re being given the runaround are usually being passed between reps who don’t have an answer for them. A comprehensive and organized knowledgebase can often (but not always) mitigate this.
Similarly, clients are less likely to receive the wrong answer if the knowledge base is up to date. This cannot prevent a rep from giving the wrong answer, but it can present that rep with every opportunity to give the right answer.
A knowledgebase, and strong knowledge management practices, can go a long way toward mitigating inconsistent information. So long as an employee can rely on a knowledgebase, they’ll have the confidence to consistently provide the information contained in it to clients.
The real takeaway is that a lot of customer service challenges can be addressed by providing consistent information. If the information in a knowledgebase is correct, and the reps consistently provide that information, there’s no way to go wrong. There are always edge cases, but that’s what an escalation path is for. Level 1 reps should be able to pretty much read from the screen.
This example is specific to a call centre, but is easily extrapolated to almost any business setting. Providing inconsistent information is a waste of everyone’s time, it’s a waste of money, and it speaks to unhealthy knowledge management practices.
The beautiful thing about information is that it is dynamic. It changes and grows. In spite of this, an organization is still able to provide consistent information so long as that information is centralized. Information captured in a knowledgebase can act as a central source of truth. So long as organizational policies enforce its use, the information provided to clients will always be consistent.
And that’s beautiful.
Knowledge Management exists to support an organization’s larger efforts. That is, it serves to further the larger goals of the organization; Knowledge Management is not the final goal, and rarely is captured knowledge an end unto itself. Unless an organization exists solely to capture information in something like an archival effort, there is almost certainly a larger organizational goal.
It follows that the Knowledge Management team must have strong insight into how the rest of an organization works. Knowledge ebbs and flows, and the team must be able to both capture information, as well as make it available in an appropriate and timely manner.
What is appropriate when it comes to access to knowledge? Not all information is destined to be made available to all work units. It can reasonably be said that there is a hierarchy of sensitivity, and some information meets the criteria to be protected. The Knowledge Management team must strive to strike a balance between total access to information, and creating information silos.
All of that to say, the Knowledge Management team must have insight at every level of an organization. It must be able to not only ingest and digest data, but must also have a strong sense of an organization’s goals at the macro level. As much as any other work area, Knowledge Management is a collective effort by both the team and the organization.
For this reason, it may be safely asserted that Knowledge Management is driven by relationships as much as it is driven by the capture of knowledge itself. Those relationships define how effective a Knowledge Management team may be, and subsequently how useful the resulting Knowledge product will be for the organization.
Effective Knowledge Management is cooperative. Organizations may live and die by the collecting and parsing of data, and Knowledge Management is intrinsically woven into that process. The end to which Knowledge Management strives is the effective capture and recall of institutional memory. Anything less is just document storage.
There may come a day where you need to script the migration of permissions from one Confluence Space to another.
The permissions of a Confluence Space can be retrieved and treated as collection of objects. This allows us to easily pass them on to a source page as a new set of permissions.
We’re using the Soap Service to effect change in the permissions of a Space. After using the ComponentLocator to declare the SpacesSoapService, we retrieve the source Space as an object. We do the same for the destination Space.
The permissions of the source Space are then extracted. This is not a single object, but rather a collection of objects.
We iterate through each of the permission objects. Each object is a collection of attributes. We need to determine if the permissions object relates to a single user, or a group. Every type of permission gets its own object.
Permissions objects associated with a group look like so:
[CREATEATTACHMENT,89948111,confluence-space-admins,null,null]
The first attribute is the permission type. The second is the permission ID. The third is the group with which the permission is associated, and this is where the format differs from the user-oriented permissions object.
[REMOVEBLOG,89948111,null,JSMITH,null]
As you can see, if the permissions object is for a user, the username appears as the fourth element.
We need to account for this in our code.
It’s easy enough to test for a group or user permission; if the username or group is null, the object is clearly for a group or user respectively.
This matters because we need to know whether to retrieve the username or the group name when setting the permission on the destination page.
Whether we detect a group or user permission, the rest of the process is straightforward. We declare the three elements required to add permissions to a Confluence Space: permission type, user or group name, and target Space Key. The addPermissionsToSpace method of the SpacesSoapService is invoked, and the three elements are provided to it. The loop iterates through each permission retrieved from the Source Space.
import com.atlassian.confluence.spaces.SpaceManager
import com.atlassian.confluence.rpc.soap.services.SpacesSoapService
import com.atlassian.sal.api.component.ComponentLocator

def addSpacePermission = ComponentLocator.getComponent(SpacesSoapService)
//Define the soap service, with which we will add the permissions to the target Space
def String[] permissions = [""]
def String remoteEntity = ""
def String spaceKey = ""
//Define the three strings that hold the permissions attributes
def sourceSpace = ComponentLocator.getComponent(SpaceManager).getSpace("<SpaceKey>")
//Define a source Space object
def destinationSpace = ComponentLocator.getComponent(SpaceManager).getSpace("<SpaceKey>")
//Define a destination Space object
def sourcePermissions = sourceSpace.getPermissions()
//Grab the permissions from the source Space, so that we can apply them to the destination Space
sourcePermissions.each{ perms ->
    //For each permissions object that the source Space contains
    if(perms.getGroup() == null){
        //If the group name field in the permissions object is null, this permission must be for a user
        permissions = [perms.getType()]
        remoteEntity = perms.getUserSubject().getName()
        spaceKey = destinationSpace.getKey()
        //Define the three permissions attributes that must be passed to the destination Space
        addSpacePermission.addPermissionsToSpace(permissions, remoteEntity, spaceKey)
        //Add the permission to the destination Space
    }else if(perms.getUserSubject() == null){
        //If the user name field in the permissions object is null, this permission must be for a group
        permissions = [perms.getType()]
        remoteEntity = perms.getGroup()
        spaceKey = destinationSpace.getKey()
        //Define the three permissions attributes that must be passed to the destination Space
        //Note that the remoteEntity has changed, as we're now fetching the group name instead of the user name
        addSpacePermission.addPermissionsToSpace(permissions, remoteEntity, spaceKey)
        //Add the permission to the destination Space
    }
}
Unfortunately Atlassian provide no clearly documented way of programmatically adding links to the sidebar of a Space. That doesn’t mean it’s not possible, but rather that Atlassian haven’t seen fit to document how it may be accomplished.
The question now facing us is “how are links added to a sidebar when using the Confluence web interface?” If we can answer that question, we can programmatically replicate it.
This question is answered over the course of three steps:
1. We first add a shortcut to the sidebar of a Space, using the main Confluence interface. When we do this, we can watch the network traffic that is generated by this request, and tease it apart to determine the actions we must take to replicate it.
2. By inspecting the Network Traffic, we can extract the CURL request that was sent to the Confluence server. From the CURL request, we can extract the target link:
https://<confluenceURL>/rest/ia/1.0/link
This is the link that the Confluence web interface uses to communicate the desired change to the Confluence server.
3. We then need the format of the data that was POSTed to this link. If we copy the POST data from the network inspection tab, we get the following JSON data:
{"spaceKey":"<SpaceKey>","customTitle":"http://msn.com","url":"http://msn.com"}
This is the JSON that we need to submit as the body of the POST request. It has three elements: the target Space Key, the title of the link, and the URL. You’ll need to fill in your own Space Key.
Note that the JSON includes the Space Key; many requests to the Atlassian REST API are direct calls to a page or Space, but in this instance we’re calling the generic /link/ URL.
90% of this code is focused on authenticating against Confluence. The actual call to the REST API is the last two lines; we first define the JSON to be passed to the API, and then make the POST call to the URL we discovered in the original CURL request.
$login = Get-Credential
#Rather than hard-coding credentials or using a read-host, get the credentials using a proper credential prompt
$PlainUsername = $login.GetNetworkCredential().UserName
#Convert the resulting login name to a useable string
$PlainPassword = $login.GetNetworkCredential().Password
#Convert the resulting password to a useable string without exposing it during the script execution
$pair = ($PlainUsername+':'+$plainPassword)
#Turn the username and password strings into a pair that can be converted to base64
$bytes = [System.Text.Encoding]::ASCII.GetBytes($pair)
$base64 = [System.Convert]::ToBase64String($bytes)
$basicAuthValue = "Basic $base64"
$headers = @{ Authorization = $basicAuthValue }
#Establish the credentials used to authenticate against Confluence
[String] $body ='{"spaceKey":"<SpaceKey>","customTitle":"TestURL2","url":"http://google.com"}'
#Define the body of the JSON to be passed to the URL
Invoke-RestMethod -Uri ("https://<confluenceURL>/rest/ia/1.0/link") -Method POST -Headers $headers -ContentType "application/json" -body $body
If successful, you’ll get a result that looks similar to this:
id : 4399
title : TestURL2
url : http://google.com
position : 0
styleClass : external_link
hidden : False
canHide : False
This is easily confirmed by going to the Space in question and checking the sidebar for the new Shortcut.
If I wanted to perform a search of Jira using Groovy and ScriptRunner, I might make use of the SearchRequestService. This class contains many useful methods, most of them having to do with filters.
A great many of these take a JiraServiceContext as an argument. For example, these are the parameters of the createFilter() method:
createFilter(JiraServiceContext serviceCtx, SearchRequest request)
At this point you might start to wonder what a “Jira Service Context” is, and for good reason. Atlassian has once again failed to document exactly what they want, or how to provide it. There is nothing in any of the documentation about what exactly this argument is. The argument has its own page in the documentation, but that page contains no useful information on actually implementing or understanding this parameter.
This is a recurring theme with Atlassian and their APIs. Rarely is the official documentation helpful or complete; instead we must rely on the kindness of strangers on the internet who post code snippets. That’s one of the major reasons I started posting the Groovy code that I wrote for Atlassian products; the vendor sure isn’t providing any help, and hopefully what I write will be of help to someone in the future.
I searched and re-searched, rephrased and retried to find a result on Google that actually implemented this argument. I finally found one, two hours and six pages back.
import com.atlassian.jira.bc.JiraServiceContextImpl
import com.atlassian.jira.component.ComponentAccessor
def user = ComponentAccessor.getJiraAuthenticationContext().getLoggedInUser()
JiraServiceContextImpl serviceContext = new JiraServiceContextImpl(user);
On its own this snippet doesn’t do much. But it does create the JiraServiceContext that is required to make use of the SearchRequestService. All the JiraServiceContext does is tell the SearchRequestService which user to run the search as. That’s ALL IT DOES. But it took several actual hours of my time to figure this out.
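To see the context actually being consumed, here’s a rough sketch that creates a saved filter through the SearchRequestService. The JQL, filter name, and description are invented for illustration, and I’m assuming the Jira Server SearchRequest constructor that takes a query, an owner, a name, and a description:
import com.atlassian.jira.bc.JiraServiceContextImpl
import com.atlassian.jira.bc.filter.SearchRequestService
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.search.SearchRequest
import com.atlassian.jira.jql.parser.JqlQueryParser

def user = ComponentAccessor.getJiraAuthenticationContext().getLoggedInUser()
def serviceContext = new JiraServiceContextImpl(user)
//The context simply carries the user that the operation runs as
def searchRequestService = ComponentAccessor.getComponent(SearchRequestService)
def jqlQueryParser = ComponentAccessor.getComponent(JqlQueryParser)
def query = jqlQueryParser.parseQuery('project = "TEST" AND resolution = Unresolved')
//Parse some JQL into a Query object
def request = new SearchRequest(query, user, "Unresolved TEST issues", "Filter created from the ScriptRunner console")
//Wrap the query in a SearchRequest owned by the same user as the context
def filter = searchRequestService.createFilter(serviceContext, request)
//Validation problems are reported on the context rather than thrown
return serviceContext.getErrorCollection()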
Big thanks to Chad Larkin and the 10-year-old post of his that actually provided a snippet that used this parameter.
It’s surprisingly difficult to simply return a list of administrators for a Jira project. I had to hunt for a relatively simple way to accomplish this. In the end I found a snippet to build on, written by Mark Markov and located here on the Atlassian forums.
I initially got frustrated because I was looking to return all of the administrators using the libraries that deal with permissions. Permissions and roles are two very different things in Jira; permissions are set up using a permission scheme, and applied to more than one Jira Project. Roles and Users are specific to each Project.
Returning the users in a Project role requires the use of several libraries. Outlined below are two pieces of Groovy code for accomplishing this. The first returns all users in a given role, across all of the projects in Jira. The second returns the users in a role for an explicitly defined project.
As always, we start by declaring our imports. The foundation of any Jira or Confluence Groovy script is the Component Accessor. We’ll also need the Project Manager and the Project Role Manager libraries.
We’ll next need an array to hold our eventual results; ScriptRunner console provides very few ways of returning output.
Using the ComponentAccessor, we define our Project Role Manager, Project Manager, and Project List.
Lastly, we define the role we’re working with as a Role object.
For each item in projectList (that is, all of the projects in Jira), we then return the list of people in the Administrators role.
We do this by defining a Project object, which takes the name of the current project from the projectList.
We next define usersInRole, which uses the getProjectRoleActors() method to return a list of users in a role for a given project. It takes two arguments that we’ve already defined: projectRole and project.
The results of the users in the role are added to a list. Each item in the list is added to stringArray, which is returned at the end as the output.
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.security.roles.ProjectRoleManager
import com.atlassian.jira.project.ProjectManager
//Define the needed imports
def stringArray = []
//Declare an array to hold the results
def projectRoleManager = ComponentAccessor.getComponent(ProjectRoleManager.class)
//Declare a Project Role Manager
def projectManager = ComponentAccessor.getComponent(ProjectManager)
//Declare a Project Manager
def projectList = projectManager.getProjectObjects().name
//Declare a list of projects
def projectRole = projectRoleManager.getProjectRole("Administrators")
//Define a role object. In this example the role is Administrators
projectList.each{ eachProject ->
    def project = projectManager.getProjectObjByName(eachProject)
    //Define a project object. We're looping through all of the projects
    def usersInRole = projectRoleManager.getProjectRoleActors(projectRole, project).getApplicationUsers().toList()
    //We feed the projectRoleManager two already-defined arguments, projectRole and project
    //The results are added to a list
    usersInRole.each{ users ->
        stringArray.add("<br> Project name: " + eachProject + " Administrator: " + users.name)
        //Add each user in the role to the results array
    }
}
return stringArray
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.security.roles.ProjectRoleManager
import com.atlassian.jira.project.ProjectManager
//Define the needed imports
def stringArray = []
//Declare an array to hold the results
def projectRoleManager = ComponentAccessor.getComponent(ProjectRoleManager.class)
//Declare a Project Role Manager
def projectManager = ComponentAccessor.getComponent(ProjectManager)
//Declare a Project Manager
def projectName = projectManager.getProjectObjByName("<PROJECT NAME>")
//Define a single project object
def projectRole = projectRoleManager.getProjectRole("Administrators")
//Define a role object. In this example the role is Administrators
def usersInRole = projectRoleManager.getProjectRoleActors(projectRole, projectName).getApplicationUsers().toList()
//We feed the projectRoleManager two already-defined arguments, projectRole and projectName
//The results are added to a list
usersInRole.each{ users ->
    stringArray.add("<br> Project name: " + projectName.name + " Administrator: " + users.name)
    //Add each user in the role to the results array
}
return stringArray
This script retrieves all the members of a supplied Confluence group, and then retrieves the timestamp of the last time that user logged in to Confluence.
As you can see from the code below, we’re working with three classes. The usual Component Locator, the Login Manager, and the Group Manager.
After telling the Component Locator to fetch the Login Manager and Group Manager, we feed the name of a group to the getGroup() method of the Group Manager. This returns a group as an object.
We also need to define an array that will hold our results. This is because the ScriptRunner console doesn’t easily provide output; you can’t simply throw in a System.out.println(). If you try to use the log as output, it truncates after 300 results. Instead we need to add the results to an array, and then return the array at the end.
The group object we created is passed to the getMemberNames() method of the group manager, which unsurprisingly returns the names of group members.
Next a for-each statement takes each user in the group and uses the getLoginInfo of the Login Manager to get the login information of each user.
Finally, within that loop we add the last successful login date of the user to the array, along with the username. The array is printed when the loop finishes.
import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.confluence.security.login.LoginManager
import com.atlassian.user.GroupManager
//These are the libraries we'll need
def loginManager = ComponentLocator.getComponent(LoginManager)
//We'll need the login manager to get the login info of the users
def groupManager = ComponentLocator.getComponent(GroupManager)
//We'll need the group manager to get all the members of a group
def groupname = groupManager.getGroup("<group name>")
//The group manager needs a group object as input, not just a string
def array = []
//In order to get the results, we need an array to hold them
def group = groupManager.getMemberNames(groupname)
//Tell the group manager to get the members of the given group
group.each{ user ->
    //For each user in the group
    def loginDetails = loginManager.getLoginInfo(user)
    //Fetch the login details of the user
    array += ("Username: " + user + " Last login date: " + loginDetails.getLastSuccessfulLoginDate() + "<br>")
    //Add the user details to the array
}
return array
//Print the array
There are two types of permissions at the Space level in Confluence: Space permissions, and Page permissions.
Page permissions are much simpler than Space permissions, for the simple reason that there are only TWO types of Page permission: VIEW and EDIT.
Retrieving the permissions of a Page is pretty simple. As you can see in the code below, you need only fill in a few variables: the page ID, and the type of permission you’d like to review.
import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.confluence.pages.PageManager
import com.atlassian.confluence.security.ContentPermission
import com.atlassian.confluence.user.UserAccessor
def userAccessor = ComponentLocator.getComponent(UserAccessor)
def pageManager = ComponentLocator.getComponent(PageManager)
//Define the user accessor and the page manager
def currentPage = pageManager.getPage(83591282)
//Use the page manager to get a specific page
def editPermissions = currentPage.getContentPermissionSet(ContentPermission.VIEW_PERMISSION)
//Define the type of permissions to be returned
editPermissions.getUserKeys().each{ names ->
    //For each person with the type of permissions
    log.warn(userAccessor.getUserByKey(names).getName())
    //Take the user key and use it to fetch the name of the associated user
}
The page ID of any Confluence page is available under Page Information, under the ellipses in the top-right corner:
In this case the Page ID is 83591282. The other piece of information that the script needs is the type of permission to return. This is simply a matter of changing ContentPermission.VIEW_PERMISSION to reflect the desired permission, such as EDIT_PERMISSION.
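For example, to list the users who can edit the page rather than view it, the relevant line in the script above becomes:
def editPermissions = currentPage.getContentPermissionSet(ContentPermission.EDIT_PERMISSION)
//Same call as before, but now returning the EDIT permission set for the page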
There is a method that directly returns the usernames from the userAccessor, but that method has been deprecated and Atlassian has not replaced it. Instead we must get the userKey of each person with the target permission type, and use that userKey to get the user’s name.
Worth noting is that there are two lists of permissions, when it comes to who can View a page. Under the same ellipses where you found the Page ID, there is a list of people who can view the page:
This is not the same as having VIEW permissions on the page. Instead these are users with global permissions in Confluence, likely Systems Administrators. These users will not be returned when you query a page for the persons having VIEW permissions.
Indeed, no users will be returned if you query a page that has been set to allow anyone to view. Only pages with View Restrictions will return a list of users with VIEW permissions:
This piece of code does several things. It returns all of the Keys for all of the Spaces in Confluence. For each Space, it retrieves the associated categories (labels). For those Spaces with a certain category or label, it then performs some permissions management.
Let’s start with retrieving all of the Keys. This actually starts with retrieving all of the information about all of the Spaces, with spaceManager.getAllSpaces(). Of course before we do that, we need to build the structure of the program. Here’s the bare minimum required to work with getAllSpaces():
import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.confluence.spaces.SpaceManager
def spaceManager = ComponentLocator.getComponent(SpaceManager)
def spaceKeys = spaceManager.getAllSpaces()
spaceKeys.each{ space ->
    return space.key
}
As always, we need to start by telling the Component Locator what to fetch for us. We then define a collection of data about all of the Spaces in Confluence. Finally, we can do something with that information. If I wanted to do something with the Key of each Space, I would work with space.key.
Now that we have a list of Keys, we can do something with that information. In this case we’re searching for Spaces with a specific label/category.
As you can see in the code below, we take the list of Space Keys. For each Space Key, we retrieve the labels/categories associated with it. If the category matches the target string, we can do something to that space.
import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.confluence.spaces.SpaceManager
import com.atlassian.confluence.labels.Label
import com.atlassian.confluence.labels.LabelManager

def spaceManager = ComponentLocator.getComponent(SpaceManager)
def labelManager = ComponentLocator.getComponent(LabelManager)
//We also need the Label Manager, to fetch the categories attached to each Space
def spaceKeys = spaceManager.getAllSpaces()
spaceKeys.each{ space ->
    //For each Space in Confluence
    labelManager.getTeamLabelsForSpace(space.key).each{ permission ->
        //Get each label associated with the Space
        if(permission.name.toString() == "application"){
            //If the name of the label matches "application"
            //Do something to that space
        }
    }
}
Adding or removing permissions from a Space is quite simple. One important difference to note is that addPermissionsToSpace() accepts an array of Strings as its first input; removePermissionFromSpace() only accepts a single string. That means that when adding permissions, you can feed the method an array. When removing permissions, you’ll need to iterate through the array with an each statement.

Otherwise the arguments that the two methods take are very similar: permission type, user or group, and Space Key.

For example, if I wanted to grant VIEW permissions to the group INTERNAL_STAFF on the Space TESTSPACE, I would assemble the statement like so:

def String[] viewPermission = ["VIEWSPACE"]
addSpacePermission.addPermissionsToSpace(viewPermission, "INTERNAL_STAFF", "TESTSPACE")
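Going the other way, removal has to happen one permission at a time. With addSpacePermission defined as in the snippets above, a sketch of revoking those same permissions might look like this:
def String[] permissionsToRemove = ["VIEWSPACE", "COMMENT"]
permissionsToRemove.each{ perms ->
    addSpacePermission.removePermissionFromSpace(perms, "INTERNAL_STAFF", "TESTSPACE")
    //removePermissionFromSpace only accepts a single permission string, so we feed it one entry per iteration
}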
import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.confluence.spaces.SpaceManager
import com.atlassian.confluence.labels.Label
import com.atlassian.confluence.labels.LabelManager
import com.atlassian.confluence.rpc.soap.services.SpacesSoapService

def addSpacePermission = ComponentLocator.getComponent(SpacesSoapService)
def labelManager = ComponentLocator.getComponent(LabelManager)
def spaceManager = ComponentLocator.getComponent(SpaceManager)
def spaceKeys = spaceManager.getAllSpaces()
spaceKeys.each{ space ->
    //For each Space in Confluence
    labelManager.getTeamLabelsForSpace(space.key).each{ permission ->
        //Get each label associated with the Space
        if(permission.name.toString() == "application"){
            //If the name of the label matches "application"
            addSpacePermission.addPermissionsToSpace(tier1PermissionsList, "<User or group>", space.key)
            //Add the permissions to that Space
            //tier1PermissionsList is the String array of permissions defined in the full script below
        }
    }
}
Notice that instead of directly referencing values, we’re feeding variables to addPermissionsToSpace(). It accepts the full array of permissions called tier1PermissionsList as the first argument, a direct reference to a user or group as the second argument (this may also be a variable instead), and the Key of the current Space as the final argument.
Below is the full script that we’ve been working toward. It does the things we’ve discussed: it locates Space Keys, retrieves labels, adds and removes permissions.
Worth noting again is that removing more than one permission type requires that you iterate through the list with an each, rather than feeding the entire array of strings to the method.

import com.atlassian.confluence.pages.Page
import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.confluence.labels.Label
import com.atlassian.confluence.labels.LabelManager
import com.atlassian.confluence.spaces.SpaceManager
import com.atlassian.confluence.rpc.soap.services.SpacesSoapService
def addSpacePermission = ComponentLocator.getComponent(SpacesSoapService)
def labelManager = ComponentLocator.getComponent(LabelManager)
def spaceManager = ComponentLocator.getComponent(SpaceManager)
def String[] permissionsList = ["VIEWSPACE","EDITSPACE","EXPORTPAGE","SETPAGEPERMISSIONS","REMOVEPAGE","EDITBLOG",
"REMOVEBLOG","COMMENT","REMOVECOMMENT","CREATEATTACHMENT","REMOVEATTACHMENT","REMOVEMAIL","EXPORTSPACE",
"SETSPACEPERMISSIONS"]
//We need the full list of permissions, as the removePermissions method only accepts one type of permission at a time
def String[] internalStaffPermissions = ["REMOVECOMMENT","VIEWSPACE","COMMENT"]
//Internal staff have only basic permissions
def String[] tier1PermissionsList = ["VIEWSPACE","EDITSPACE","EXPORTPAGE","SETPAGEPERMISSIONS","REMOVEPAGE",
"EDITBLOG","REMOVEBLOG","COMMENT","REMOVECOMMENT","CREATEATTACHMENT","REMOVEATTACHMENT","REMOVEMAIL",
"EXPORTSPACE"]
//This group gets everything except admin rights (SETSPACEPERMISSIONS)
def spaceKeys = spaceManager.getAllSpaces()
//Get details for all the Spaces in the instance
spaceKeys.each{ space ->
    //Iterate through the keys
    labelManager.getTeamLabelsForSpace(space.key).each{ permission ->
        //For each key value, get the labels for that space
        if(permission.name.toString() == "application"){
            //If the space has a label that matches the target
            //Do something to that space
            addSpacePermission.addPermissionsToSpace(permissionsList, "<User or group>", space.key)
            //Grant ALL permissions to a group
            addSpacePermission.addPermissionsToSpace(internalStaffPermissions, "<User or group>", space.key)
            //Grant limited permissions to a group
            addSpacePermission.addPermissionsToSpace(tier1PermissionsList, "<User or group>", space.key)
            //Grant limited permissions to a group
            addSpacePermission.addPermissionsToSpace(tier1PermissionsList, "<User or group>", space.key)
            //Grant limited permissions to a group
            addSpacePermission.addPermissionsToSpace(tier1PermissionsList, "<User or group>", space.key)
            //Grant limited permissions to a group
            permissionsList.each{ perms ->
                addSpacePermission.removePermissionFromSpace(perms, "<User or group>", space.key)
                //We have to iterate through the total list of permissions, because we can't feed a string array to removePermissionFromSpace like we did with addPermissionsToSpace
            }
        }
    }
}
Here’s a chart of the types of permissions that may be granted to a user or group on a Confluence Space.
These values would be useful in conjunction with a script that did something like setting permissions on a Confluence Space.
“Delete Own” is undocumented by Atlassian, but maps to a value of “REMOVEOWNCONTENT”.
“Restrictions – Add/Delete” is referenced as “Pages – Restrict” in the Atlassian documentation, which is neither clear nor helpful.
Name | Description | Programmatic Value
View | View all content in the space | VIEWSPACE
Pages – Create | Create new pages and edit existing ones | EDITSPACE
Pages – Export | Export pages to PDF, Word | EXPORTPAGE
Restrictions – Add/Delete | Set page-level permissions | SETPAGEPERMISSIONS
Pages – Remove | Remove pages | REMOVEPAGE
News – Create | Create news items and edit existing ones | EDITBLOG
News – Remove | Remove news | REMOVEBLOG
Comments – Create | Add comments to pages or news in the space | COMMENT
Comments – Remove | Remove the user’s own comments | REMOVECOMMENT
Attachments – Create | Add attachments to pages and news | CREATEATTACHMENT
Attachments – Remove | Remove attachments | REMOVEATTACHMENT
Mail – Remove | Remove mail | REMOVEMAIL
Space – Export | Export space to HTML or XML | EXPORTSPACE
Space – Admin | Administer the space | SETSPACEPERMISSIONS
Delete Own | Delete Own Content | REMOVEOWNCONTENT
import com.atlassian.confluence.security.SpacePermissionManager
import com.atlassian.confluence.spaces.SpaceManager
import com.atlassian.sal.api.component.ComponentLocator
//Import the libraries
def spacePermissionManager = ComponentLocator.getComponent(SpacePermissionManager)
def spaceManager = ComponentLocator.getComponent(SpaceManager)
//Invoke the Space Manager by telling the Component Locator to retrieve it for us
def sourceSpace = spaceManager.getSpace("<Space Key>")
//Tell the Space Manager which space we're querying
def admins = []
//Define the list of admins as an array
spaceManager.getSpaceAdmins(sourceSpace).each{ permission ->
    admins.add(permission.name)
    //For each administrator that the Space Manager returns from the target space, add that name to the array
    //Do something else with the names
}
return admins
//Print the list of administrators.
As you can see, the bulk of the code is just structural, and is similar to other scripts you may have created. The key is using the Space Manager to fetch the details about the target Space, and then parsing those details for the information you need.
The basic management of Confluence Space permissions is quite trivial. However if you spend any time on the internet looking for a solution, you’ll find yourself going in circles, or starting to believe that Space permissions management is only possible via the front-end.
There are essentially two ways in which an Atlassian product may be programmatically managed. It may be done via the REST API, or you may use a plugin such a ScriptRunner that allows you to write Groovy scripts that make use of internal Atlassian classes and methods.
There is currently no obvious or easy way to use the REST API to make permissions changes to a Confluence Space on Confluence Server. Please note that this is different from permissions management of individual Confluence pages.
Instead what we need to do is look backwards, to the JSON RPC system that Atlassian used to use. What I like about these RPC calls is that I can call them using CURL, or I can access the library through the ScriptRunner Console. Here’s the basic code:

import com.atlassian.confluence.rpc.soap.services.SpacesSoapService
import com.atlassian.sal.api.component.ComponentLocator
def addSpacePermission = ComponentLocator.getComponent(SpacesSoapService)
def String[] permissions = ["EDITSPACE"]
def String remoteEntity = "<UserOrGroup>"
def String spaceKey = "<spaceKey>"
addSpacePermission.addPermissionsToSpace(permissions, remoteEntity, spaceKey)
//Add permission to the target space: array of permissions, the user or group to be added, and the target space key
Notice that we’re importing the SOAP services library. The only other library we need to import is the Component Locator.
After importing the libraries, we need to tell the Component Locator to retrieve the Soap Services functionality. By defining this, we can then call the methods of the SOAP Service.
Finally, we feed the variables to the method. Please note that the method only accepts an array as the first argument; you can’t goose it and use a string.
You can also call the RPC API from within a script, using CURL.
curl --user <username>:<password> -H "Content-Type: application/json" -H "Accept: application/json" -X POST -d '{
"jsonrpc" : "2.0", "method" : "addPermissionToSpace", "params" : [["<Permission Types>"], "<User Or Group>", "<Space Key>"], "id": 7 }' <Confluence URL>/rpc/json-rpc/confluence/service-v2?os_authType=basic
It really is that simple. Hopefully Atlassian will update the REST API to allow for Space permissions management in the near future.