Extract transactions from Nordnet with PowerShell!

I was asked whether my Python program for extracting transactions from Nordnet could be translated to PowerShell. It can, although in a somewhat more rudimentary version. Here is code that logs in to Nordnet and fetches transaction data for a single account/portfolio. To make the script work, you need to insert a few values in the right places:

  • your Nordnet username and password
  • the from and to dates you want to fetch transactions for
  • the account number of the Nordnet account you want to fetch from (your first account has account number 1, and so on)

Here is the code:

# Force TLS 1.2 for the requests to Nordnet
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

# First part of cookie setting prior to login
$url = 'https://classic.nordnet.dk/mux/login/start.html?cmpi=start-loggain&state=signin'
$r1 = iwr $url -SessionVariable cookies

# Second part of cookie setting prior to login
$url = 'https://classic.nordnet.dk/api/2/login/anonymous/'
$r2 = iwr $url -method 'POST' -Headers @{'Accept' = '*/*'} -WebSession $cookies

# Actual login - insert your Nordnet username and password here
$body = @{'username'=''; 'password'=''}
$url = 'https://classic.nordnet.dk/api/2/authentication/basic/login'
$r3 = iwr $url -method 'POST' -Body $body -Headers @{'Accept' = '*/*'} -WebSession $cookies

# Getting a NEXT cookie
$url = 'https://classic.nordnet.dk/oauth2/authorize?client_id=NEXT&response_type=code&redirect_uri=https://www.nordnet.dk/oauth2/'
$r4 = iwr $url -WebSession $cookies

# Get transactions as CSV - insert your account number (account_id) and from/to dates in the query string
$url = 'https://www.nordnet.dk/mediaapi/transaction/csv/filtered?locale=da-DK&account_id=1&from=2019-08-01&to=2019-10-01'
$r5 = iwr $url -WebSession $cookies

# Nordnet returns the CSV as UTF-16. Re-interpret the content and drop the leading character before using the data.
$content = $r5.Content
$encoding = [System.Text.Encoding]::unicode
$bytes = $encoding.GetBytes($content)

$decoded_content = [System.Text.Encoding]::utf32.GetString($bytes)
$decoded_content = $decoded_content.Substring(1,$decoded_content.length-1)

Pull your transactions out of the new Nordnet

Here is an update of my old program for extracting transactions from Nordnet. It has been updated to work with Nordnet's new design and API:

# -*- coding: utf-8 -*-
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com
""" This program logs into a Nordnet account and extracts transactions as a csv file.
Handy for exporting to Excel with as few manual steps as possible """

import requests 
from datetime import datetime
from datetime import date

# USER ACCOUNT, PORTFOLIO AND PERIOD DATA. SHOULD BE EDITED FOR YOUR NEEDS #

# Nordnet user account credentials and accounts/portfolios names (choose yourself) and numbers.
# To get account numbers go to https://www.nordnet.dk/transaktioner and change
# between accounts. The number after "accid=" in the new URL is your account number.
# If you have only one account, your account number is 1.
user = ''
password = ''
accounts = {
	"Frie midler: Nordnet": "1",
	"Ratepension": "3",
}

# Start date (start of period for transactions) and date today used for extraction of transactions
startdate = '2013-01-01'
today = date.today()
enddate = datetime.strftime(today, '%Y-%m-%d')

# Manual data lines. These can be used if you have portfolios elsewhere that you would
# like to add manually to the data set. If no manual data the variable manualdataexists
# should be set to False
manualdataexists = True
manualdata = """
Id;Bogføringsdag;Handelsdag;Valørdag;Transaktionstype;Værdipapirer;Instrumenttyp;ISIN;Antal;Kurs;Rente;Afgifter;Beløb;Valuta;Indkøbsværdi;Resultat;Totalt antal;Saldo;Vekslingskurs;Transaktionstekst;Makuleringsdato;Verifikations-/Notanummer;Depot
;30-09-2013;30-09-2013;30-09-2013;KØBT;Obligationer 3,5%;Obligationer;;72000;;;;-69.891,54;DKK;;;;;;;;;;Frie midler: Finansbanken
"""

# CREATE VARIABLES FOR LATER USE. #

# Creates a dictionary to use with cookies	
cookies = {}

# A variable to store transactions before saving to csv
transactions = ""

# LOGIN TO NORDNET #

# First part of cookie setting prior to login
url = 'https://classic.nordnet.dk/mux/login/start.html?cmpi=start-loggain&state=signin'
request = requests.get(url)
cookies['LOL'] = request.cookies['LOL']
cookies['TUX-COOKIE'] = request.cookies['TUX-COOKIE']

# Second part of cookie setting prior to login
url = 'https://classic.nordnet.dk/api/2/login/anonymous'
request = requests.post(url)
cookies['NOW'] = request.cookies['NOW']

# Actual login that gets us cookies required for later use
url = 'https://classic.nordnet.dk/api/2/authentication/basic/login'
request = requests.post(url,cookies=cookies, data = {'username': user, 'password': password})
cookies['NOW'] = request.cookies['NOW']
cookies['xsrf'] = request.cookies['xsrf']

# Getting a NEXT cookie
url = 'https://classic.nordnet.dk/oauth2/authorize?client_id=NEXT&response_type=code&redirect_uri=https://www.nordnet.dk/oauth2/'
request = requests.get(url, cookies=cookies)
cookies['NEXT'] = request.history[1].cookies['NEXT']

# GET TRANSACTION DATA #

# Payload and url for transaction requests
payload = {
'locale': 'da-DK',
'from': startdate,
'to': enddate,
}

url = "https://www.nordnet.dk/mediaapi/transaction/csv/filtered"

firstaccount = True
for portfolioname, id in accounts.items():
	payload['account_id'] = id
	data = requests.get(url, params=payload, cookies=cookies)
	result = data.content.decode('utf-16')
	result = result.replace('\t',';')

	result = result.splitlines()
	
	firstline = True
	for line in result:
		# For first account and first line, we use headers and add an additional column
		if line and firstline == True and firstaccount == True:
			transactions += line + ';' + "Depot" + "\n"
			firstaccount = False
			firstline = False
		# First lines of additional accounts are discarded
		elif line and firstline == True and firstaccount == False:
			firstline = False
		# Content lines are added
		elif line and firstline == False:
			# Fix because Nordnet sometimes adds one empty column too many
			if line.count(';') == 23:
				line = line.replace('; ',' ')
			transactions += line + ';' + portfolioname + "\n"

# ADD MANUAL LINES IF ANY #
if manualdataexists == True:
	manualdata = manualdata.split("\n",2)[2]
	transactions += manualdata

# Saves CSV
with open("transactions.csv", "w", encoding='utf8') as fout:
	fout.write(transactions)


Get historical and real-time prices for your securities in the new Nordnet

Nordnet has a new design and a new API. That means a few more hoops to jump through than before when you want to get hold of the prices of your securities.

Here is a Python program that can help you. It requires a Nordnet login.

# -*- coding: utf-8 -*-
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com
""" This program extracts historical stock prices from Nordnet (and Morningstar as a fallback) """

import requests
from datetime import datetime
from datetime import date
import os

# Nordnet user account credentials
user = ''
password = ''

# DATE AND STOCK DATA. SHOULD BE EDITED FOR YOUR NEEDS #

# Start date (start of historical price period)
startdate = '2013-01-01'

# List of shares to look up prices for.
# Format is: Name, Morningstar id, Nordnet stock identifier
# See e.g. https://www.nordnet.dk/markedet/aktiekurser/16256554-novo-nordisk-b
# (identifier is 16256554)
# All shares must have a name (whatever you like). To get prices they must
# either have a Nordnet identifier or a Morningstar id
sharelist = [
["Maj Invest Pension","F0GBR064UH",16099877],
["Novo Nordisk B A/S","0P0000A5BQ",16256554],
["Nordnet Superfonden Danmark","F00000TH8X",""],
]

# CREATE VARIABLES FOR LATER USE. #

# A variable to store historical prices before saving to csv	
finalresult = ""
finalresult += '"date";"price";"instrument"' + '\n'

# A cookie dictionary for storing cookies
cookies = {}

# NORDNET LOGIN #

# First part of cookie setting prior to login
url = 'https://classic.nordnet.dk/mux/login/start.html?cmpi=start-loggain&state=signin'
request = requests.get(url)
cookies['LOL'] = request.cookies['LOL']
cookies['TUX-COOKIE'] = request.cookies['TUX-COOKIE']

# Second part of cookie setting prior to login
url = 'https://classic.nordnet.dk/api/2/login/anonymous'
request = requests.post(url, cookies=cookies)
cookies['NOW'] = request.cookies['NOW']

# Actual login that gets us cookies required for later use
url = "https://classic.nordnet.dk/api/2/authentication/basic/login"
request = requests.post(url,cookies=cookies, data = {'username': user, 'password': password})
cookies['NOW'] = request.cookies['NOW']
cookies['xsrf'] = request.cookies['xsrf']

# Getting a NEXT cookie
url = "https://classic.nordnet.dk/oauth2/authorize?client_id=NEXT&response_type=code&redirect_uri=https://www.nordnet.dk/oauth2/"
request = requests.get(url, cookies=cookies)
cookies['NEXT'] = request.history[1].cookies['NEXT']

# LOOPS TO REQUEST HISTORICAL PRICES AT NORDNET AND MORNINGSTAR #

# Nordnet loop to get historical prices
for share in sharelist:
	# A Nordnet stock identifier must exist
	if share[2]:
		url = "https://www.nordnet.dk/api/2/instruments/historical/prices/" + str(share[2])
		payload = {"from": startdate, "fields": "last"}
		data = requests.get(url, params=payload, cookies=cookies)
		jsondecode = data.json()
		
		# Sometimes the final date is returned twice. A list is created to check for duplicates.
		datelist = []
		
		for value in jsondecode[0]['prices']:
			price = str(value['last'])
			price = price.replace(".",",")
			date = datetime.fromtimestamp(value['time'] / 1000)
			date = datetime.strftime(date, '%Y-%m-%d')
			# Only adds a date if it has not been added before
			if date not in datelist:
				datelist.append(date)
				finalresult += '"' + date + '"' + ";" + '"' + price + '"' + ";" + '"' + share[0] + '"' + "\n"

# Morningstar loop to get historical prices			
for share in sharelist:
	# Only runs for one specific fund in this instance
	if share[0] == "Nordnet Superfonden Danmark":
		payload = {"id": share[1], "currencyId": "DKK", "idtype": "Morningstar", "frequency": "daily", "startDate": startdate, "outputType": "COMPACTJSON"}
		data = requests.get("http://tools.morningstar.dk/api/rest.svc/timeseries_price/nen6ere626", params=payload)
		jsondecode = data.json()
		
		for lists in jsondecode:
			price = str(lists[1])
			price = price.replace(".",",")
			date = datetime.fromtimestamp(lists[0] / 1000)
			date = datetime.strftime(date, '%Y-%m-%d')
			finalresult += '"' + date + '"' + ";" + '"' + price + '"' + ";" + '"' + share[0] + '"' + "\n"

# WRITE CSV OUTPUT TO FILE #			

with open("kurser.csv", "w", newline='', encoding='utf8') as fout:
	fout.write(finalresult)


Wallnot's Twitter bot: Version 2

It is only a few days since Wallnot.dk's Twitter bot went live. You can find the bot here and my post about it here.

The bot worked fine as such, but because of a limit of 250 requests per month in Twitter's API, I could only update 4 times a day, which is pretty rarely (the old program made 2 requests every time it ran, i.e. 30 days * 4 updates * 2 requests = 240 requests).
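To spell out that arithmetic, here is a quick back-of-the-envelope check (a small sketch in Python; the 2 requests per run and the 250-request quota are the numbers mentioned above):

# Rough check of the old bot's Twitter API budget
requests_per_run = 2      # the old program made 2 search requests per run
monthly_quota = 250       # Twitter API limit of 250 requests per month
days_per_month = 30

max_runs_per_day = monthly_quota / (requests_per_run * days_per_month)
print(max_runs_per_day)   # about 4.2, so at most 4 updates a day within the quota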

Luckily I found TWINT, a Python module that makes it easy to fetch data from Twitter without using Twitter's API and its tiresome limits.

Reusing most of my old code, I have now made a version of the bot that can run as often as I like. For now, I have set it to run 4 times an hour.

Just for fun, I have also added a list of friendly adjectives about Politiken's and Zetland's subscribers, which the program picks from at random every time it posts a link on Twitter.

The finished code

Here is the finished code, if you are interested.

# -*- coding: utf-8 -*-
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com

import requests
from bs4 import BeautifulSoup
from datetime import datetime
from datetime import date
from datetime import timedelta
import json
import time
import random
import twint	# https://github.com/twintproject/twint
from TwitterAPI import TwitterAPI

# CONFIGURATION #
# List to store articles to post to Twitter
articlestopost = []

# Yesterday's date variable
yesterday = date.today() - timedelta(days=1)
since = yesterday.strftime("%Y-%m-%d")

# Twint configuration
c = twint.Config()
c.Hide_output = True
c.Store_object = True
c.Since = since

# API LOGIN
client_key = ''
client_secret = ''
access_token = ''
access_secret = ''
api = TwitterAPI(client_key, client_secret, access_token, access_secret)


# POLITIKEN #
# Run search
searchterm = "politiken.dk/del"
c.Search = searchterm
twint.run.Search(c)
tweets = twint.output.tweets_object

# Add urls in tweets to list and remove any duplicates from list
urllist = []
for tweet in tweets:
	for url in tweet.urls:
		if searchterm in url:
			urllist.append(url)

urllist = list(set(urllist))

# Only process urls that were not in our last Twitter query
proceslist = []
with open("./pol_lastbatch.json", "r", encoding="utf8") as fin:
	lastbatch = list(json.load(fin))
	for url in urllist:
		if url not in lastbatch:
			proceslist.append(url)
# Save current query to use for next time
with open("./pol_lastbatch.json", "wt", encoding="utf8") as fout:
	lastbatch = json.dumps(urllist)
	fout.write(lastbatch)

# Request articles and get titles and dates and sort by dates
articlelist = []
titlecheck = []

for url in proceslist:
	try:
		data = requests.get(url)
		result = data.text
		if '"isAccessibleForFree": "True"' not in result:
			soup = BeautifulSoup(result, "lxml")
			# Finds titles and timestamps
			title = soup.find('meta', attrs={'property':'og:title'})
			title = title['content']
			timestamp = soup.find('meta', attrs={'property':'article:published_time'})
			timestamp = timestamp['content']
			dateofarticle = datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S%z')
			realurl = data.history[0].headers['Location']
			if title not in titlecheck:
				articlelist.append({"title": title, "url": realurl, "date": dateofarticle})
				titlecheck.append(title)			
	except Exception as e:
		print(url)
		print(e)
			
articlelist_sorted = sorted(articlelist, key=lambda k: k['date']) 

# Check if article is already posted and update list of posted articles
with open("./pol_published.json", "r", encoding="utf8") as fin:
	alreadypublished = list(json.load(fin))
	# File below used for paywall.py to update wallnot.dk
	with open("./pol_full_share_links.json", "r", encoding="utf8") as finalready:	
		alreadypublishedalready = list(json.load(finalready))
		for art in articlelist_sorted:
			url = art['url']
			token = url.index("?shareToken")
			url = url[:token]
			if url not in alreadypublished:
				alreadypublished.append(url)
				articlestopost.append(art)
				alreadypublishedalready.append(art['url'])
		# Save updated already published links
		with open("./pol_published.json", "wt", encoding="utf8") as fout:
			alreadypublishedjson = json.dumps(alreadypublished)
			fout.write(alreadypublishedjson)
		with open("./pol_full_share_links.json", "wt", encoding="utf8") as fout:
			alreadypublishedjson = json.dumps(alreadypublishedalready)
			fout.write(alreadypublishedjson)


# ZETLAND #
# Run search
searchterm = "zetland.dk/historie"
c.Search = searchterm
twint.run.Search(c)
tweets = twint.output.tweets_object

# Add urls in tweets to list and remove any duplicates from list
urllist = []
for tweet in tweets:
	for url in tweet.urls:
		if searchterm in url:
			urllist.append(url)

urllist = list(set(urllist))

# Only process urls that were not in our last Twitter query
proceslist = []
with open("./zet_lastbatch.json", "r", encoding="utf8") as fin:
	lastbatch = list(json.load(fin))
	for url in urllist:
		if url not in lastbatch:
			proceslist.append(url)
# Save current query to use for next time
with open("./zet_lastbatch.json", "wt", encoding="utf8") as fout:
	lastbatch = json.dumps(urllist)
	fout.write(lastbatch)

# Request articles and get titles and dates and sort by dates
articlelist = []
titlecheck = []

for url in proceslist:
	try:
		data = requests.get(url)
		result = data.text
		soup = BeautifulSoup(result, "lxml")
		title = soup.find('meta', attrs={'property':'og:title'})
		title = title['content']
		timestamp = soup.find('meta', attrs={'property':'article:published_time'})
		timestamp = timestamp['content']
		timestamp = timestamp[:timestamp.find("+")]
		dateofarticle = datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S.%f')
		if title not in titlecheck:
			articlelist.append({"title": title, "url": url, "date": dateofarticle})
			titlecheck.append(title)
	except Exception as e:
		print(url)
		print(e)
			
articlelist_sorted = sorted(articlelist, key=lambda k: k['date']) 

# Check if article is already posted and update list of posted articles
with open("./zet_published.json", "r", encoding="utf8") as fin:
	alreadypublished = list(json.load(fin))
	for art in articlelist_sorted:
		title = art['title']
		if title not in alreadypublished:
			alreadypublished.append(title)
			articlestopost.append(art)
	# Save updated already published links
	with open("./zet_published.json", "wt", encoding="utf8") as fout:
		alreadypublishedjson = json.dumps(alreadypublished, ensure_ascii=False)
		fout.write(alreadypublishedjson)


# POST TO TWITTER #
friendlyterms = ["flink","rar","gavmild","velinformeret","intelligent","sød","afholdt","bedårende","betagende","folkekær","godhjertet","henrivende","smagfuld","tækkelig","hjertensgod","graciøs","galant","tiltalende","prægtig","kær","godartet","human","indtagende","fortryllende","nydelig","venlig","udsøgt","klog","kompetent","dygtig","ejegod","afholdt","omsorgsfuld","elskværdig","prægtig","skattet","feteret"]
enjoyterms = ["God fornøjelse!", "Nyd den!", "Enjoy!", "God læsning!", "Interessant!", "Spændende!", "Vidunderligt!", "Fantastisk!", "Velsignet!", "Glæd dig!", "Læs den!", "Godt arbejde!", "Wauv!"]

if articlestopost:
	for art in articlestopost:
		if "zetland" in art['url']:
			medium = "Zetland"
		else:
			medium = "Politiken"
		friendlyterm = random.choice(friendlyterms)
		enjoyterm = random.choice(enjoyterms)
		status = "En " + friendlyterm + " abonnent på " + medium + " har delt en artikel. " + enjoyterm + " " + art['url']
		r = api.request('statuses/update', {'status': status})
		time.sleep(15)

How to pull your data out of Saxo Bank with Python

Updated 20/10/2019: Sometimes Saxo Bank shows a "disclaimer" (in this case about Brexit) that you have to get past before you are allowed to log in. I have adjusted the code so the program can handle this case.

I have previously written about how I extract transaction data from my accounts at Nordnet.

Now I have also become a customer at Saxo Bank. (Why? The possibility of opening an aktiesparekonto, a Danish share savings account, and no minimum commission.)

Saxo Bank's self-service site is really poor (compared to Nordnet's).

So I wanted to find out whether I could pull transaction data out of my Saxo Bank account without having to look at the website at all.

I could. Maybe not in the smartest way in the world, because Saxo Bank actually has an API solution you can use if you are up for filling out an agreement by hand and scanning it (I couldn't be bothered).

Here you can read how I got hold of my data.

Talking HTTP with Saxo

Just like when I fetched my transactions from Nordnet, I examined how my browser talks to, and displays data from, my account at Saxo Bank.

First I looked at what the website does when it has to display my data. I chose completed trades in the menu.

Then I looked at what the browser did. It turns out that the website, sensibly enough, uses Saxo Bank's API when it has to show data to the user.

I could see that the API received a long string (an "Authorization" header with the word BEARER in front of it). I assumed this was necessary in order to get data back.
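To make that observation concrete, such a request looks roughly like this in Python (a minimal sketch; the endpoint is one the finished program uses later, and the token value here is a made-up placeholder):

import requests

# The Authorization header observed in the browser's API calls
# (the token value is a made-up placeholder)
headers = {'Authorization': 'BEARER eyJhbGciOi...'}

# One of the endpoints the site itself calls, also used later in this program
url = 'https://www.saxoinvestor.dk/openapi/port/v1/clients/me'
r = requests.get(url, headers=headers)
print(r.status_code)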

So the question was really just: how is such a BEARER string generated?

Back to the start

To work out how the BEARER string is generated, I went back to the beginning: I opened the login page and pressed F12 in my browser (Chrome) to follow along with the network requests.

The login page's form sends my username and password along with a string, "AuthnRequest", that is generated anew every time the login page is loaded.

In my Python program, I try to send such a form and then examine what I get back.

# Visit login page and get AuthnRequest token value from input form
url = 'https://www.saxoinvestor.dk/Login/da/'
r = requests.get(url)

soup = BeautifulSoup(r.text, "html.parser")
input = soup.find_all('input', {"id":"AuthnRequest"})
authnrequest = input[0]["value"]

# Login step 1: Submit username, password and token and get another token back
url = 'https://www.saxoinvestor.dk/Login/da/'
r = requests.post(url, data = {'field_userid': user, 'field_password': password, 'AuthnRequest': authnrequest})

Slightly shortened, the response looks like this:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta name="Application-State" content="service=IDP;federated=False;env=Live;state=Ok;authenticated=True;"><meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>
<noscript><p><strong>Note:</strong> Since your browser does not support Javascript, you must press the Continue button to proceed.</p></noscript>
<form id="form" action="https://www.saxoinvestor.dk/investor/login.sso.ashx" method="post"><div>
<input type="hidden" name="SAMLResponse" value="PHNhbWxwOlJlc3BvbnNlIElEPSJfNjQzZGI4ODQtMDMzZi00MWVhLWE4ZjEtYzVjOWVlMWIxM2IwIiBJblJlc3BvbnNlVG89Il9mN2E3ODBlZi0yZjdmLTQyYmItODk1[...]G9uc2U+"/>
<input type="hidden" name="RelayState" value=""/>
<input type="hidden" name="PageLoadInfo" id="PageLoadInfo" value=""/></div>
<noscript><div>
<input type="submit" value="Continue"/></div></noscript></form><script type="text/javascript">function doSubmit(){var t=-1;if(window.location.hash){var m=window.location.hash.match(/\/lst\/(\d+)/);if(m) t=parseInt(m[1]);}if(t>=0 &amp;&amp; document.getElementById("PageLoadInfo").value=='')document.getElementById("PageLoadInfo").value=t;document.forms.form.submit();}</script><script  type="text/javascript">doSubmit();</script></body></html>

And what is that? A form ("<input>") for browsers without Javascript, with a field called "SAMLResponse" whose value is a long (shortened here) string.

In Chrome I can see that the very last step, before I land on the front page of the self-service site, is that my browser sends the "SAMLResponse" to a page called "login.sso.ashx".

So I confidently send the form along with the SAMLResponse value and see what I get back:

soup = BeautifulSoup(r.text, "html.parser")
input = soup.find_all('input', {"name":"SAMLResponse"})
samlresponse = input[0]["value"]

# Login step 2: Get bearer token necessary for API requests
url = 'https://www.saxoinvestor.dk/investor/login.sso.ashx'
r = requests.post(url, data = {'SAMLResponse': samlresponse})

And voilà: the page redirects me to the API with a BEARER token I can use. I grab it (and trim it a bit) like this:

bearer = r.history[0].headers['Location']
bearer = bearer[bearer.find("BEARER"):bearer.find("/exp/")]
bearer = bearer.replace("%20"," ")

And then I am ready to fetch data from the API. It starts like this:

# START API CALLS
# Documentation at https://www.developer.saxo/openapi/learn

# Set bearer token as header
headers = {'Authorization': bearer}

# First API request gets Client Key which is used for most API calls
# See https://www.developer.saxo/openapi/learn/the-tutorial for expected return data
url = 'https://www.saxoinvestor.dk/openapi/port/v1/clients/me'
r = requests.get(url, headers=headers)

clientdata = r.json()
clientkey = clientdata['ClientKey']

The whole program

The whole program, including the way I convert the Saxo Bank data into the same format as Nordnet's transaction data, is here.

You can do exactly what you like with it (at your own risk).

# -*- coding: utf-8 -*-
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com
"""This program logs into a Saxo Bank account and lets you make API requests."""

import requests 
from datetime import datetime
from datetime import date
from bs4 import BeautifulSoup
import json

# USER ACCOUNT AND PERIOD DATA. SHOULD BE EDITED FOR YOUR NEEDS #

# Saxo user account credentials
user = '' # your user id
password = '' # your password

# Start date (start of period for transactions) and date today used for extraction of transactions
startdate = '2019-01-01'
today = date.today()
enddate = datetime.strftime(today, '%Y-%m-%d')

# LOGIN TO SAXO BANK
	
# Visit login page and get AuthnRequest token value from input form
url = 'https://www.saxoinvestor.dk/Login/da/'
r = requests.get(url)

soup = BeautifulSoup(r.text, "html.parser")
input = soup.find_all('input', {"id":"AuthnRequest"})
authnrequest = input[0]["value"]

# Login step 1: Submit username, password and token and get another token back
url = 'https://www.saxoinvestor.dk/Login/da/'
r = requests.post(url, data = {'field_userid': user, 'field_password': password, 'AuthnRequest': authnrequest})

soup = BeautifulSoup(r.text, "html.parser")
input = soup.find_all('input', {"name":"SAMLResponse"})
# Most of the time this works
if input:
	samlresponse = input[0]["value"]
# But sometimes there's a disclaimer that Saxo Bank would like you to accept
else:
	input = soup.find_all('input')
	inputs = {}
	try:
		for i in input:
			inputs[i['name']] = i['value']
	except:
		pass
	url = 'https://www.saxotrader.com/disclaimer'
	request = requests.post(url, data=inputs)
	cook = request.cookies['DisclaimerApp']
	returnurl = cook[cook.find("ReturnUrl")+10:cook.find("&IsClientStation")]
	url = 'https://live.logonvalidation.net/complete-app-consent/' + returnurl[returnurl.find("complete-app-consent/")+21:]
	request = requests.get(url)
	soup = BeautifulSoup(request.text, "html.parser")
	input = soup.find_all('input', {"name":"SAMLResponse"})
	samlresponse = input[0]["value"]

# Login step 2: Get bearer token necessary for API requests
url = 'https://www.saxoinvestor.dk/investor/login.sso.ashx'
r = requests.post(url, data = {'SAMLResponse': samlresponse})

bearer = r.history[0].headers['Location']
bearer = bearer[bearer.find("BEARER"):bearer.find("/exp/")]
bearer = bearer.replace("%20"," ")

# START API CALLS
# Documentation at https://www.developer.saxo/openapi/learn

# Set bearer token as header
headers = {'Authorization': bearer}

# First API request gets Client Key which is used for most API calls
# See https://www.developer.saxo/openapi/learn/the-tutorial for expected return data
url = 'https://www.saxoinvestor.dk/openapi/port/v1/clients/me'
r = requests.get(url, headers=headers)

clientdata = r.json()
clientkey = clientdata['ClientKey']

# Example API call #1
url = 'https://www.saxoinvestor.dk/openapi/cs/v1/reports/aggregatedAmounts/' + clientkey + '/' + startdate + '/' + enddate + '/'
r = requests.get(url, headers=headers)
data = r.json()

# Working on that data to add some transaction types to personal system
saxoaccountname = "Aktiesparekonto: Saxo Bank"
currency = "DKK"
saxotransactions = ""

for item in data['Data']:
	if item['AffectsBalance'] == True:
		date = item['Date']
		amount = item['Amount']
		amount_str = str(amount).replace(".",",")
		if item['UnderlyingInstrumentDescription'] == 'Cash deposit or withdrawal' or item['UnderlyingInstrumentDescription'] == 'Cash inter-account transfer':
			if amount > 0:
				transactiontype = 'INDBETALING'
			elif amount < 0:
				transactiontype = 'HÆVNING'
			saxotransactions += ";" + date + ";" + date + ";" + date + ";" + transactiontype + ";;;;;;;;" + amount_str + ";" + currency + ";;;;;;;;;" + saxoaccountname + "\r\n"
		if item['AmountTypeName'] == 'Corporate Actions - Cash Dividends':
			transactiontype = "UDB."
			if item['InstrumentDescription'] == "Novo Nordisk B A/S":
				paper = "Novo B"
				papertype = "Aktie"
			if item['InstrumentDescription'] == "Tryg A/S":
				paper = "TRYG"
				papertype = "Aktie"
			saxotransactions += ";" + date + ";" + date + ";" + date + ";" + transactiontype + ";" + paper + ";" + papertype + ";;;;;;" + amount_str + ";" + currency + ";;;;;;;;;" + saxoaccountname + "\n"

# Example API call #2		
url = "https://www.saxoinvestor.dk/openapi/cs/v1/reports/trades/" + clientkey + "?fromDate=" + startdate + "&amp;" + "toDate=" + enddate
r = requests.get(url, headers=headers)
data = r.json()

# Working on that data to add trades to personal system
for item in data['Data']:
	date = item['AdjustedTradeDate']
	numberofpapers = str(int(item['Amount']))
	amount_str = str(item['BookedAmountAccountCurrency']).replace(".",",")
	priceperpaper = str(item['BookedAmountAccountCurrency'] / item['Amount']).replace(".",",")
	if item['TradeEventType'] == 'Bought':
		transactiontype = "KØBT"
	if item['AssetType'] == 'Stock':
		papertype = "Aktie"
	if item['InstrumentDescription'] == "Novo Nordisk B A/S":
		paper = "Novo B"
		isin = "DK0060534915"
	if item['InstrumentDescription'] == "Tryg A/S":
		paper = "TRYG"
		isin = "DK0060636678"
	saxotransactions += ";" + date + ";" + date + ";" + date + ";" + transactiontype + ";" + paper + ";" + papertype + ";" + isin + ";" + numberofpapers + ";" + priceperpaper + ";;;" + amount_str + ";" + currency + ";;;;;;;;;" + saxoaccountname + "\n"

How I pull links to free Zetland articles from Zetland's Twitter account for wallnot.dk

On wallnot.dk I publish a list of free articles from a long list of media outlets that use paywalls. The site is meant as a service for readers who know that they want to read news articles and that they do not want to pay for them.

Zetland is not like the other papers. There is no front page with links to every newly published article. Instead, Zetland uses Twitter to post teasers.

I thought it was a shame not to have Zetland on Wallnot, so instead of looking for links on a front page, as Wallnot does for the other media, I used Twitter's API to pull out article links.

Here you can see how I did it. If you want to try the program out, you need to register as a developer with Twitter.

# -*- coding: utf-8 -*-
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com
""" This program uses the Twitter API to get a list of free articles from Zetland """

import requests
from bs4 import BeautifulSoup
from datetime import datetime
from datetime import date
import json
from nested_lookup import nested_lookup
import base64


# GETS TWITTER DATA #

# Key and secret from Twitter developer account: https://developer.twitter.com/en/apply/user
client_key = ''
client_secret = ''

# Key and secret encoding, preparing for Twitter request
key_secret = '{}:{}'.format(client_key, client_secret).encode('ascii')
b64_encoded_key = base64.b64encode(key_secret)
b64_encoded_key = b64_encoded_key.decode('ascii')

base_url = 'https://api.twitter.com/'
auth_url = '{}oauth2/token'.format(base_url)

auth_headers = {
	'Authorization': 'Basic {}'.format(b64_encoded_key),
	'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8'
}

auth_data = {
	'grant_type': 'client_credentials'
}

auth_resp = requests.post(auth_url, headers=auth_headers, data=auth_data)
auth_resp.json().keys()
access_token = auth_resp.json()['access_token']

search_headers = {
	'Authorization': 'Bearer {}'.format(access_token)
}

# Search parameters for Zetland tweets
search_params = {
	'user_id': 452898921,
	'count': 35,
	'tweet_mode': 'extended',
	'exclude_replies': 'true',
	'trim_user': 'true'
}

# Request url for searching user timelines
search_url = '{}1.1/statuses/user_timeline.json'.format(base_url)

# Request to Twitter
search_resp = requests.get(search_url, headers=search_headers, params=search_params)

# Response from Twitter in json format
tweet_data = search_resp.json()
#prettyjson = json.dumps(tweet_data, ensure_ascii=False, indent=4) # Only needed for debugging to pretify json

# Looks for all instances of expanded_url (that is, links) in json	
linklist = list(set(nested_lookup('expanded_url', tweet_data)))

# Populates a list of links to Zetland articles
urllist = []
for link in linklist:
	if "zetland.dk/historie" in link:
		urllist.append(link)

		
# GETS ARTICLE DATA FROM ZETLAND #
		
# Requests articles and get titles and dates and sort by dates directly from Zetland site
articlelist = []
titlecheck = []

for url in urllist:
	try:
		data = requests.get(url)
		result = data.text

		# Soup site and create a dictionary of links and their titles and dates
		articledict = {}
		soup = BeautifulSoup(result, "lxml")

		title = soup.find('meta', attrs={'property':'og:title'})
		title = title['content']
		
		timestamp = soup.find('meta', attrs={'property':'article:published_time'})
		timestamp = timestamp['content']
		timestamp = timestamp[:timestamp.find("+")]
		dateofarticle = datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S.%f')
		
		if title not in titlecheck:
			articlelist.append({"title": title, "url": url, "date": dateofarticle})
			titlecheck.append(title)
	except:
		print(url)


# PREPARES LIST OF ARTICLES FOR WALLNOT.DK #

# Sort articles by date (newest first)		
articlelist_sorted = sorted(articlelist, key=lambda k: k['date'], reverse=True) 

# Removes articles older than approximately three months
articlelist_recent = []
now = datetime.now()
for article in articlelist_sorted:
	timesincelast = now - article["date"]
	if timesincelast.days < 92:
		articlelist_recent.append(article)

# Converts dates to friendly format for display and outputs articles as html paragraphs
zet_linkstr = ""
for article in articlelist_recent:
	friendlydate = article["date"].strftime("%d/%m")
	zet_linkstr += '<p>' + friendlydate + ': ' + '<a href="' + article["url"] + '">' + article["title"] + '</a></p>\n' 

# Prints list of articles	
print(zet_linkstr)