I wanted to create an interface in Python that had a row of icons at the top. Depending on the screen being displayed, I wanted one of those icons to be highlighted or a different color than the others. 

This proved to be more challenging than I expected.  You can set the color of all of the icons, but setting the color of a single one is a different story.

The solution I came up with was to create a function that sets the target icon’s color when the program loads, rather than trying to do it through a KivyMD attribute on the TopAppBar widget.

def set_icon_color(self, dt):
    # Tell the method where to find each screen's app bar icons
    screen_1_icon = self.screen_1.ids.menu_app_bar.ids.right_actions.children

    # Define what the target icon should look like
    screen_1_icon[0].theme_icon_color = "Custom"
    screen_1_icon[0].text_color = "00ADB5"

    screen_2_icon = self.screen_2.ids.menu_app_bar.ids.right_actions.children

    screen_2_icon[1].theme_icon_color = "Custom"
    screen_2_icon[1].text_color = "00ADB5"

    screen_3_icon = self.screen_3.ids.menu_app_bar.ids.right_actions.children

    screen_3_icon[2].theme_icon_color = "Custom"
    screen_3_icon[2].text_color = "00ADB5"

 

We call this function when the program loads. It sets the color of the target icon on each screen without affecting the others, so when I switch to any given screen, the appropriate icon is already highlighted with a distinct color.
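For reference, here is a minimal sketch of how that call could be wired up, assuming the method lives on the MDApp subclass and the screens are stored as attributes (the class name here is hypothetical); the dt argument exists because Kivy’s Clock passes the elapsed time to scheduled callbacks.

from kivy.clock import Clock
from kivymd.app import MDApp

class MenuApp(MDApp):  # hypothetical app class name
    def on_start(self):
        # Defer the color change until the widget tree has been built
        Clock.schedule_once(self.set_icon_color)

    def set_icon_color(self, dt):
        ...  # the method shown above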

The other thing I had

There may come a day when you’re asked to create a large number of Confluence pages. Rather than doing it by hand, why not script it?

This Python script essentially does two things: it reads a CSV file, and it sends page creation requests to a Confluence server.

For each row in the CSV file, it assumes the page name should be the value in the first cell of the row.  It then generates an HTML table that is sent as part of the page creation request. 

Beyond generating HTML tables, this approach could be useful for setting up a large number of template pages to be filled in by various departments. It could also run as a scheduled job, automatically creating a set of pages every week or month to store meeting notes or reports.

Please note that in order to connect to the Confluence server, you’ll need to generate a Personal Access Token.

 

import csv
import requests
import json
import html
import logging

# Initialize logging
logging.basicConfig(level=logging.ERROR)

api_url = 'https://<url>.com/rest/api/content/'
# The content endpoint of your Confluence DC instance

file_path = "<your CSV file path>"
# Where the CSV file is stored locally

parent_page_id = "<your parent page ID>"
# The page under which the new pages will be created
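The excerpt above stops at the configuration section. As a rough sketch of how the rest of the script could look, assuming the Personal Access Token is sent as a Bearer header, the first cell of each row holds the page title, and the remaining cells become table cells (the space key is a placeholder you’d fill in):

headers = {
    'Authorization': 'Bearer <your Personal Access Token>',
    'Content-Type': 'application/json',
}

with open(file_path, newline='') as csv_file:
    for row in csv.reader(csv_file):
        page_title = row[0]

        # Build a simple HTML table from the remaining cells in the row
        cells = ''.join('<td>' + html.escape(cell) + '</td>' for cell in row[1:])
        table_body = '<table><tbody><tr>' + cells + '</tr></tbody></table>'

        payload = {
            'type': 'page',
            'title': page_title,
            'ancestors': [{'id': parent_page_id}],
            'space': {'key': '<your space key>'},
            'body': {'storage': {'value': table_body, 'representation': 'storage'}},
        }

        response = requests.post(api_url, headers=headers, data=json.dumps(payload))
        if response.status_code != 200:
            logging.error('Failed to create page %s: %s', page_title, response.text)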

The request on the Atlassian forums that caught my eye last night was for a way to return all Jira Cloud attachments with a certain extension. Ordinarily it would be easy enough to point to ScriptRunner as the solution, but the user raised data residency concerns in his initial post.

My solution was to write him a Python script that provides simple reporting on issues with attachments of certain extension types. Almost anything the REST API can accomplish can be accomplished with Python; you don’t HAVE to have ScriptRunner.

The hardest part of working with external scripts is figuring out the authorization scheme. Once you’ve got that figured out, the rest is just the same REST API interaction that you’d get with UniREST and ScriptRunner for Cloud.

Authorizing the Script

First, read the instructions: https://developer.atlassian.com/cloud/jira/platform/basic-auth-for-rest-apis/ 

Then:

1. Generate an API token: https://id.atlassian.com/manage-profile/security/api-tokens

2. Go to https://www.base64encode.net/ (or use Python’s base64 module to do the encoding; see the sketch after these steps)

3. Base-64 encode a string in exactly this format: youratlassianlogin:APIToken.   
If your email is john@adaptamist.com and the API token you generated is:

ATATT3xFfGF0nH_KSeZZkb_WbwJgi131SCo9N-ztA3SAySIK5w3qo9hdrxhqHZAZvimLHxbMA7ZmeYRMMNR

 

Then the string you base-64 encode is:

john@adaptamist.com:ATATT3xFfGF0nH_KSeZZkb_WbwJgi131SCo9N-ztA3SAySIK5w3qo9hdrxhqHZAZvimLHxbMA7ZmeYRMMNR

 

Do not forget the colon between the two pieces.
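As a rough sketch of how the pieces could fit together, here is one way to do the encoding with Python’s base64 module and then report attachments with a given extension. The JQL, extension, and endpoint parameters below are assumptions you would adjust for your own site.

import base64
import requests

email = 'john@adaptamist.com'
api_token = '<your API token>'

# Base-64 encode "email:token" -- don't forget the colon
encoded = base64.b64encode((email + ':' + api_token).encode()).decode()

headers = {
    'Authorization': 'Basic ' + encoded,
    'Accept': 'application/json',
}

# Search for issues and inspect their attachments (example JQL and extension)
url = 'https://<your-site>.atlassian.net/rest/api/3/search'
params = {'jql': 'project = TEST', 'fields': 'attachment', 'maxResults': 50}

response = requests.get(url, headers=headers, params=params)
for issue in response.json().get('issues', []):
    for attachment in issue['fields'].get('attachment', []):
        if attachment['filename'].lower().endswith('.pdf'):
            print(issue['key'] + ': ' + attachment['filename'])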

In my previous post I explored how to access the Confluence Cloud Space Permissions API endpoint. 

This Python script extends that, and gives a user a permission set in all Spaces in Confluence. This could be useful if you wanted to give one person Administrative rights on all Spaces in Confluence, for example.

Note that the user must first have READ/SPACE permission before any other permissions can be granted.

import requests
import json

headers = {
    'Authorization': 'Basic <Base-64 encoded email and API token>',
    'Content-Type': 'application/json',
    'Accept': 'application/json',
}

userID = '<user ID (not name)>'

# Fetch the list of Spaces
url = 'https://<url>.atlassian.net/wiki/rest/api/space/'
resp = requests.get(url, headers=headers)
data = json.loads(resp.text)

# Grant the user READ/SPACE permission in each Space
for space in data["results"]:
    url = 'https://<url>.atlassian.net/wiki/rest/api/space/' + space["key"] + '/permission'

    permission = {
        "subject": {"type": "user", "identifier": userID},
        "operation": {"key": "read", "target": "space"},
        "_links": {},
    }

    try:
        response = requests.post(url=url, headers=headers, data=json.dumps(permission))
        print(response.content)
    except requests.exceptions.RequestException:
        print("Could not add permissions to Space " + space["key"])

There’s a great deal of information on the internet about managing Confluence Space permissions with scripts, and how there’s no REST endpoint for it, and how it’s basically impossible.

This is incorrect.

There’s also a lot of information about using the JSONRPC or XMLRPC APIs to accomplish this.   These APIs are only available on Server/DC. In the Cloud they effectively don’t exist, so this is yet more misinformation.

So why all the confusion?

There’s a lot of outdated information out there that floats around and doesn’t disappear even after it stops being correct or relevant. This is one of the major struggles I had when I started learning how to write scripts to interact with Jira and Confluence.    Much of the information used to be relevant, but five or six or ten years later it only serves to distract people looking for a solution. That’s one of the major reasons I started this blog in the first place.

Specific to this instance, another reason for confusion is that the documentation for the REST API does outline an endpoint for Confluence Space permission management, but it includes some very strict limitations that could easily be misinterpreted.

The limitation is this: the

Overview

It’s possible to connect to a Jira instance using Python, and it’s possible to connect to AWS Comprehend using Python. Therefore, it’s possible to marry the two and use Python to assess the sentiment of Jira issues. There are two caveats when it comes to using this script:

  1. The script assumes you can authenticate against Jira with Basic Web Authentication. If your organization uses Single Sign On, this script would need to be amended. 
  2. The script assumes you’re working with Jira Server or Datacenter.  If you’re using Jira Cloud the approach would be different, but I’m planning to do a post about that in the near future.

The authentication method below is not mine. I have linked to the Stack Overflow page where I found it, in the script comments.

 

The Script

The script starts with three imports. We need the Jira library, logging, and the AWS library (boto3).  You’ll likely need to do a pip install of jira and boto3 if you’ve not used them before.

After the imports we define the client, which we use to interact with the AWS API.  Remember to change the region to whichever region is appropriate for you, in addition
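The full script isn’t reproduced in this excerpt, but as a minimal sketch of the opening described above (the region, credentials, and server URL are placeholder assumptions):

from jira import JIRA
import boto3
import logging

logging.basicConfig(level=logging.ERROR)

# Client used to talk to the AWS Comprehend API; change the region as appropriate
client = boto3.client(
    'comprehend',
    region_name='us-east-1',
    aws_access_key_id='<your access key>',
    aws_secret_access_key='<your secret key>',
)

# Basic web authentication against Jira Server/Data Center
jira = JIRA(server='https://<your-jira-server>', basic_auth=('<username>', '<password>'))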

As part of my grad school course work, I had half a dozen XML files with content that needed to be analyzed for sentiment.   AWS Comprehend is a service that analyzes text in a number of ways, and one of those is sentiment analysis.

My options were to either cut and paste the content of 400 comments from these XML files, or come up with a programmatic solution.  Naturally, I chose the latter.

The XML file is formatted like so:

 

<posts>
  <post id="123456">
    <parent>0</parent>
    <userid>user id</userid>
    <created>timestamp</created>
    <modified>timestamp</modified>
    <mailed>1</mailed>
    <subject>Post title</subject>
    <message>Message content</message>
  </post>
</posts>

 

What I needed to get at was the message element of each post, as well as the post id.

The script imports BeautifulSoup to work with the XML, and boto3, to work with AWS.    We next define a string buffer, because we need to store the results of the analysis somehow.

Next we define the client, which tells AWS everything it needs to know.  Tell it the service you’re after, the AWS region, and the tokens you’d use to authenticate against AWS.

After that we provide a list of XML files that the script needs to parse, and tell it to
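The post continues from here, but as a rough sketch of the approach described so far (file names, credentials, and region are placeholders; the 'xml' parser assumes lxml is installed):

import io
import boto3
from bs4 import BeautifulSoup

results = io.StringIO()  # string buffer to hold the analysis results

client = boto3.client(
    'comprehend',
    region_name='us-east-1',
    aws_access_key_id='<your access key>',
    aws_secret_access_key='<your secret key>',
)

xml_files = ['file1.xml', 'file2.xml']  # placeholder file names

for file_name in xml_files:
    with open(file_name, encoding='utf-8') as f:
        soup = BeautifulSoup(f, 'xml')

    # Pull the id and message out of each post element
    for post in soup.find_all('post'):
        post_id = post.get('id')
        message = post.find('message').get_text(strip=True)

        sentiment = client.detect_sentiment(Text=message, LanguageCode='en')
        results.write(post_id + ',' + sentiment['Sentiment'] + '\n')

print(results.getvalue())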

This is part three of my series on using Python to connect to the Twitter API.

Imagine for a moment that you had a specific vision for your Twitter account.  A vision of balance, and harmony.  What if you only followed people who also followed you?  Whether or not you want to curate your Twitter experience in this transactional way is entirely up to you. It’s your account!

We can do that with Python. As always, replace the placeholders with your own account credentials.  See Part One of this series if you’re not sure how to do that. 

Let’s take a look at the code required to do this:

import tweepy

consumerKey = "<>"
consumerSecret = "<>"
accessToken = "<>"
accessTokenSecret = "<>"

auth = tweepy.OAuthHandler(consumerKey, consumerSecret)
auth.set_access_token(accessToken, accessTokenSecret)

#Call the api 
api = tweepy.API(auth,wait_on_rate_limit=True)

#define two empty lists
followers = []
following = []
  
#Get the list of friends
for status in tweepy.Cursor(api.get_friends,count=200).items():
    #Add the results to the list
    following.append(status.screen_name)

#get the list of followers
for status in tweepy.Cursor(api.get_followers,count=200).items():
    #Add the results to the list
    followers.append(status.screen_name)
    
#compare the lists and take action
for person in following:
    if person not in followers:
        api.destroy_friendship(screen_name=person)
        print("Unfollowed " + person)
         

As

Introduction

There are a great number of things that you might want to do with Twitter for which the web or mobile clients don’t have facilities.  For example, you might want to run a script that automatically thanks anyone who follows you, or a script that Likes any comment someone adds to your post.

It is worth noting that Twitter is extremely strict when it comes to automated actions around followers.  For example, it’s entirely possible to scrape the follower list of large accounts and write a script to automatically follow all of those people.  That would conceivably get you a large number of followers, and when you were done you could just write another script to unfollow anyone who didn’t follow you back.  I promise that the Twitter API will pick up on what you’re doing and put you in Twitter Jail.  Don’t do that.

In this post we’ll examine how to make a basic connection to the Twitter API, using Python and Tweepy.  We’ll investigate one of the errors you might encounter, and discuss the pagination of the results that the API returns.  The example we’re using will return a list
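As a preview of the kind of connection and pagination the post builds toward, here is a minimal sketch; the credentials are placeholders, and listing followers is just one example of a paginated call.

import tweepy

# Placeholder credentials; see the Developer access setup in Part One
auth = tweepy.OAuthHandler("<consumer key>", "<consumer secret>")
auth.set_access_token("<access token>", "<access token secret>")

# wait_on_rate_limit pauses the script instead of failing when the rate limit is hit
api = tweepy.API(auth, wait_on_rate_limit=True)

# Cursor handles the pagination of results behind the scenes
for follower in tweepy.Cursor(api.get_followers, count=200).items():
    print(follower.screen_name)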

Introduction

A great deal of the available information regarding the use of Twitter and Python is outdated. The Twitter API has undergone several major revisions in the last few years, and many of the available tutorials now only lead to frustration.

Not only has the API undergone major revisions, but there are multiple supported versions of the API.  Some methods referenced by online tutorials will only work with certain other methods!

My hope for this series is to provide a clear and concise tutorial for connecting to the Twitter API using Tweepy and Python.

In order to connect to the Twitter API, your account must be provisioned for Developer access.  This is a free service, at the basic level, but does require additional setup.  That will be the focus of this first blog post.

Resources Required

You will need:

  • A Twitter account with a verified email address and verified phone number
  • Developer access to that account
  • A Python IDE (I use Spyder)
  • Tweepy

This post focuses solely on gaining Developer access, and assumes you already have the account.

The Setup

While I intend for this tutorial to be quite detailed, I trust that you can handle signing up for a