I am entirely self-taught when it comes to ScriptRunner and Groovy.   Everything I’ve learned has been through trial and error, and Googling different permutations of words until I find a solution.

A great deal of the information out there assumes that you already know how to work with ScriptRunner, Groovy, and Jira or Confluence.  I found this terrifically frustrating when I first started: I didn't have the requisite knowledge to put what I was finding into context, never mind apply it to the specific use case I was working on.

For that reason, I’m going back to the beginning.  I’m starting an ongoing series of blog posts about how to get started with ScriptRunner for both Jira and Confluence.  You need to learn to walk before you can run, which is why I’m calling this the ScriptWalker series.

Not only will this hopefully be a resource for people just starting out with ScriptRunner, but it will also force me to be sure that I can teach what I’m doing. In the end, that will make me better at it as well.

 

Adaptavist has a tool called Microscope that Jira admins can use to look into the specifics of various aspects of the system, including workflows. If you’re looking to examine an instance, I recommend using Microscope rather than writing your own script.

However, it was requested that I look into workflows in a very specific way: the requester needed a tool that would take the name of a group and search all of the workflows for any references to that group.  In effect, they were searching for dependencies on that group within the workflows.

This script does not do that, but it is the basis upon which that script is built.  It takes a workflow and returns the validators for all of its transitions; as sketched after the code below, it could easily be adjusted to return the conditions, triggers, and so on.

 

 import com.atlassian.jira.workflow.WorkflowManager
import com.atlassian.jira.component.ComponentAccessor

def workflowManager = ComponentAccessor.getComponent(WorkflowManager)
def wf = workflowManager.getWorkflow("<workflow name>")
//wf is the workflow itself

def statuses = wf.getLinkedStatusObjects()
//statuses are the statuses available to that workflow

statuses.each { status ->
    //For each status associated with the workflow

    def relatedActions = wf.getLinkedStep(status).getActions()
    //relatedActions is the set of actions (transitions) associated with each status

    relatedActions.each { action ->
        //For each transition, log its name and its validators
        log.warn("For the status '" + status.name + "', the transition '" + action.name + "' has validators: " + action.validators)
    }
}
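As an example of the adjustment mentioned above, here is a rough sketch of the same loop returning conditions instead of validators. It assumes the standard OSWorkflow descriptor methods (getRestriction, getConditionsDescriptor, getConditions); treat it as a starting point rather than a finished script.

 import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.workflow.WorkflowManager

def workflowManager = ComponentAccessor.getComponent(WorkflowManager)
def wf = workflowManager.getWorkflow("<workflow name>")

wf.getLinkedStatusObjects().each { status ->
    wf.getLinkedStep(status).getActions().each { action ->
        //Conditions live inside the transition's restriction descriptor, rather than directly on the action
        def conditions = action.getRestriction()?.getConditionsDescriptor()?.getConditions()
        log.warn("The transition '" + action.name + "' from status '" + status.name + "' has conditions: " + conditions)
    }
}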

This was actually an interesting problem to solve.  Atlassian doesn’t seem to want anyone returning all of the users in a Jira instance through the API.  There’s supposedly a method for doing this, but it doesn’t work if you’re running the script through a Connect app like ScriptRunner; it’s another method that only works from an external script, as was the case with managing Confluence Cloud Space permissions.

Instead, what we do is run an empty query through the user search. However, this presents its own set of challenges, as the body is only returned in a raw format: instead of returning parsed JSON, the HTTP response comes back as a byte stream.

So after running the query, we turn the response into text and then use the JSON Slurper to turn it into a JSON object with which we can work.

Despite the strangeness of the raw response, pagination still works: startAt and maxResults behave as expected, and are necessary to retrieve all of the results.  However, there is no flag in the HTTP response that marks the last page of results, such as “lastPage”, so we must determine the final page of results ourselves.

The script starts by asserting
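The full script isn’t reproduced here, but a minimal sketch of the approach might look something like the following. It assumes the standard ScriptRunner for Jira Cloud get() helper, the /rest/api/2/user/search endpoint with an empty query, and asString() for the raw body; the exact endpoint and response handling may need adjusting for your instance.

 import groovy.json.JsonSlurper

def startAt = 0
def maxResults = 50
def allUsers = []

while (true) {
    //Run an empty query against the user search, paging with startAt and maxResults
    def response = get("/rest/api/2/user/search?query=&startAt=${startAt}&maxResults=${maxResults}")
        .header("Accept", "application/json")
        .asString()

    //The body comes back as raw text rather than parsed JSON, so feed it to the JSON Slurper
    def users = new JsonSlurper().parseText(response.body)

    allUsers.addAll(users)

    //There is no "lastPage" flag, so treat a short page as the final page of results
    if (users.size() < maxResults) {
        break
    }

    startAt += maxResults
}

return allUsers.size()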

I spend a fair amount of time writing listeners with ScriptRunner, so that Jira will do various things for me based on certain criteria. For example, I write a lot of listeners that listen for when an issue is updated, and take action accordingly.

Until today, however, I had never thought about how I might leverage the system to determine what kind of update had triggered the event.  I always just wrote the criteria in by hand, so that the listener would ignore anything I didn’t want to react to.

The listener I was working on today got so complex, had so many nested IF statements and conditions, that it occurred to me to search for a better way.

As it turns out, the event object contains a lot of information, including which field’s change triggered the listener.  

In my own example, I was looking at the labels field.  I wanted the script to send a Slack message if the issue had been updated, but only if the labels had changed in a certain way.

The line of code that checks the type of update event doesn’t even require an import:

 def change = event?.getChangeLog()?.getRelated("ChildChangeItem")?.find { it.field == "labels" }

if (change) {
    //The labels field was changed by this update, so take action accordingly
    log.warn("Labels changed from '" + change.oldstring + "' to '" + change.newstring + "'")
}

In my previous post I explored how to access the Confluence Cloud Space Permissions API endpoint. 

This Python script extends that, and grants a user a set of permissions in every Space in Confluence. This could be useful if you wanted to give one person administrative rights on all Spaces in Confluence, for example.

Note that the user must first have READ/SPACE permission before any other permissions can be granted.

 import requests
import json

headers = {
    'Authorization': 'Basic <Base-64 encoded username and password>',
    'Content-Type': 'application/json',
    'Accept': 'application/json',
}

userID = '<user ID (not name)>'

# Fetch the list of Spaces
url = 'https://<url>.atlassian.net/wiki/rest/api/space/'
resp = requests.get(url, headers=headers)
data = json.loads(resp.text)

# For each Space, grant the user READ/SPACE permission
for space in data["results"]:
    url = "https://<url>.atlassian.net/wiki/rest/api/space/" + space["key"] + '/permission'

    dictionary = {"subject": {"type": "user", "identifier": userID},
                  "operation": {"key": "read", "target": "space"},
                  "_links": {}}

    payload = json.dumps(dictionary)

    try:
        response = requests.post(url=url, headers=headers, data=payload)
        print(response.content)
    except requests.exceptions.RequestException:
        print("Could not add permissions to Space " + space["key"])

There’s a great deal of information on the internet about managing Confluence Space permissions with scripts, and how there’s no REST endpoint for it, and how it’s basically impossible.

This is incorrect.

There’s also a lot of information about using the JSONRPC or XMLRPC APIs to accomplish this.   These APIs are only available on Server/DC. In the Cloud they effectively don’t exist, so this is yet more misinformation.

So why all the confusion?

There’s a lot of outdated information out there that floats around and doesn’t disappear even after it stops being correct or relevant. This was one of my biggest struggles when I started learning to write scripts that interact with Jira and Confluence: much of that information used to be accurate, but five or ten years later it only serves to distract people looking for a solution. That’s one of the main reasons I started this blog in the first place.

Specific to this case, another source of confusion is that the REST API documentation does outline an endpoint for Confluence Space permission management, but it comes with a very strict limitation that is easily misinterpreted.

The limitation is this: the endpoint cannot be used by apps such as ScriptRunner, even when impersonating a user. It can only be called from an external script or tool that authenticates directly as a user.

This script examines the membership of every group in Confluence.  It returns information in three sections:

  • The Confluence user directories
  • Information about each group in Confluence
  • Information about each user in Confluence

The output looks like the blob below.  As noted, the information is divided into three sections. Formatting is provided by the script:

 

Directory Info

Directory name: Confluence Internal Directory

Group Info

confluence-administrators has 1 members. 1 of those members are active and 0 of those members are inactive
confluence-users has 2 members. 1 of those members are active and 1 of those members are inactive

User Info

Group: confluence-administrators. Username: admin. User account is active. User is in directory: Confluence Internal Directory
Group: confluence-users. Username: admin. User account is active. User is in directory: Confluence Internal Directory
Group: confluence-users. Username: slaurin. User account is inactive. User is in directory: Confluence Internal Directory


And here is the code:

 import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.user.GroupManager
import com.atlassian.confluence.user.UserAccessor
import com.atlassian.confluence.user.DisabledUserManager
import com.atlassian.crowd.manager.directory.DirectoryManager

def disUse = ComponentLocator.getComponent(DisabledUserManager)
//Used to check whether a given user account is disabled

def userAccessor = ComponentLocator.getComponent(UserAccessor)
def groupManager = ComponentLocator.getComponent(GroupManager)

def directoryManager = ComponentLocator.getComponent(DirectoryManager)

def activeUsers = 0
def inactiveUsers = 0

def groups = groupManager.getGroups()
def groupInfoStringBuffer = ["<h1>Group Info</h1>"]
def userInfoStringBuffer = ["<h1>User Info</h1>"]

The Jira Cloud Migration Assistant (JCMA) will only migrate some types of custom fields.  The custom fields that it cannot migrate must be recreated on the Cloud side, or otherwise mitigated in some way.

I wrote a small tool that proactively identifies any custom field in a Jira instance that JCMA will not be able to migrate.

It’s important to be proactive, especially when it comes to migrations. Every unexpected error or issue results in lost time, and a delayed migration.

Here’s the code:

 import com.atlassian.jira.component.ComponentAccessor
 
def migrateableFieldTypes = [
"datepicker",
"datetime",
"textfield",
"textarea",
"grouppicker",
"labels",
"multicheckboxes",
"radiobuttons",
"multigrouppicker",
"multiuserpicker",
"float",
"select",
"multiselect",
"cascadingselect",
"userpicker",
"url",
"project",
"version",
"multiversion"
]
 
def sb = []
 
def customFields = ComponentAccessor.getCustomFieldManager().getCustomFieldObjects()
//Get all of the custom fields in the system
 
customFields.each { customField ->
    //For each custom field

    def fieldTypeKey = customField.properties.genericValue.customfieldtypekey.split(":")[1]
    //The second half of the field type key, e.g. "select" or "textfield"

    if (!migrateableFieldTypes.contains(fieldTypeKey)) {
        //If the custom field type does not match one of the items in the array of migrateable field types

        sb.add(customField.getFieldName() + " | " + fieldTypeKey + "<br>")
        //Note that custom field
    }
}

return sb.toString().replace(",", "")

When migrating data from Jira Server/DC to Jira Cloud, JCMA does not like issues that have no assignee, or whose assignee is an inactive user.

This script checks a list of issues and replaces the assignee on any issue where the assignee is missing or inactive.

The script comments should explain fairly well what’s happening. By default, the script is set up to check all of the issues in a given project. However, by commenting out one line and uncommenting another, you can feed it a specific list of issues across any number of projects.

 import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.event.type.EventDispatchOption
def issueManager = ComponentAccessor.getIssueManager()
def issueService = ComponentAccessor.getIssueService()
def userManager = ComponentAccessor.getUserManager()
def projectManager = ComponentAccessor.getProjectManager()

final replacementUser = "<username>"
//Define the username that will be used to replace the current assignee

final projName = "<project name>"
//Define the project to be checked

def project = projectManager.getProjectObjByName(projName).id
//Declare a project ID object, using the name of the project

def issues = ComponentAccessor.issueManager.getIssueIdsForProject(project)
//Get all the issues in the project

//final issues = ["<issues>"]
//Uncomment this line and comment out the previous one to feed the script a specific list of issues

issues.each { targetIssue ->
    //For each issue, load the full issue object
    def issue = issueManager.getIssueObject(targetIssue)

    def assignee = issue.getAssignee()

    //If the issue has no assignee, or the assignee's account is inactive, swap in the replacement user
    if (!assignee || !assignee.isActive()) {
        issue.setAssignee(userManager.getUserByName(replacementUser))
        issueManager.updateIssue(ComponentAccessor.jiraAuthenticationContext.loggedInUser, issue, EventDispatchOption.ISSUE_UPDATED, false)
    }
}

Overview

Please note: this solution was originally posted by Peter-Dave Sheehan on the Atlassian Forums. I’m just explaining how I use it.

 

Sometimes when I’m trying to solve a problem with Jira, the internal Java libraries just aren’t sufficient. They’re often not documented, or they’re opaque. 

It’s often far easier to turn to the REST API to get work done, but that’s a little trickier on Jira Server or Data Center than it is on Cloud.  On Jira Cloud, a REST call can be as simple as:

 def result = get("/rest/api/2/issue/<issue key>")
    .header('Content-Type', 'application/json')
    .asObject(Map)

//Return the body of each comment on the issue
result.body.fields.comment.comments.body.each { commentBody ->
    return commentBody
}

However, this won’t work on Server/DC.  Instead, we need a REST framework upon which to build our script.

The Framework

This piece of code uses the currently logged in user to authenticate against the Jira REST API.  It then makes a GET call to the designated API endpoint URL.

This code can easily be changed to a POST or a PUT simply by uncommenting the payload statement and the setRequestBody statement, then changing the MethodType from GET to POST or PUT.
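Since the framework code itself isn’t reproduced above, here is a rough sketch of that kind of framework, reconstructed from memory rather than copied verbatim from the forum post. It assumes the SAL TrustedRequestFactory and trusted token authentication are available on the instance; check it against Peter-Dave Sheehan’s original before relying on it.

 import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.sal.api.component.ComponentLocator
import com.atlassian.sal.api.net.Request
import com.atlassian.sal.api.net.TrustedRequest
import com.atlassian.sal.api.net.TrustedRequestFactory
import groovy.json.JsonSlurper

def currentUser = ComponentAccessor.jiraAuthenticationContext.loggedInUser
def baseUrl = ComponentAccessor.applicationProperties.getString("jira.baseurl")
def endpoint = "/rest/api/2/issue/<issue key>"

def trustedRequestFactory = ComponentLocator.getComponent(TrustedRequestFactory)

//Create the request; change MethodType.GET to POST or PUT as needed
def request = trustedRequestFactory.createTrustedRequest(Request.MethodType.GET, baseUrl + endpoint) as TrustedRequest

//Authenticate the call as the currently logged in user
request.addTrustedTokenAuthentication(new URL(baseUrl).host, currentUser.name)
request.addHeader("Content-Type", "application/json")

//For a POST or PUT, uncomment these two lines and supply a JSON payload
//def payload = '{"fields": {}}'
//request.setRequestBody(payload)

//Execute the call and parse the response body into a JSON object we can work with
def result = new JsonSlurper().parseText(request.execute())

return result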
 

The script returns a JSON blob. With point notation, we can then easily access its individual attributes and work with them just as we would with any other Groovy map.