Package tracking across multiple carriers

At the moment I am practicing Django, a framework for building web applications in Python. It is really clever.

It took a couple of hours to get https://wallnot.dk/pakker/ up and running, but then again, no effort has gone into the user interface, and the underlying code could certainly be smarter too. The site can be used to track parcels out for delivery from several different carriers (PostNord, GLS and DAO).

If you have parcels on the way from other carriers and want to share the package numbers with me, I am interested.

A new take on a simulation of War

My Python simulation of the card game War was not particularly elegant. For war, double war and so on, a lot of nested "if" statements repeated the same logic. (I also found some silly bugs, so I have updated the original post.)

So I have tried writing a new version. Instead of nested conditions, it keeps a single index into each player's deck that jumps four cards ahead on every war, and the game simply ends with an IndexError when a player runs out of cards.

It works fine and produces the following output for 1,000,000 games:

Der blev spillet 1000000 spil
Det gennemsnitlige antal dueller var 177.217668
Det højeste antal dueller var 2238
Det laveste antal dueller var 3
Den spiller med højest sum af kort vandt 573276 gange (57%)
Den spiller med højest sum af kort tabte 397771 gange (40%)
Uafgjorte spil: 1
Antal enkeltkrig, dobbeltkrig, osv.: 12348559, 886651, 60655, 3722, 218, 11, 2
Vendte kort uden krig og med krig: 176766958, 13299818
Spillene tog 225.4 sekunder

The new program:

# KRIG #
import time
start_time = time.time()
import random

number_of_games_to_play = 1000000
number_of_games_counter = 0
number_of_plays_list = []
highest_deck_won = 0
highest_deck_lost = 0
equal_games = 0
war_types = [0,0,0,0,0,0,0]
war_or_not_war = [0,0]

# Loop to play games
percentage_copy = 0
i = 0
while i < number_of_games_to_play:
	# One is added to i so the loop finishes once the number of games has been played
	i += 1
	
	# Prints percentage done with 1 decimal every time it changes
	percentage_completed = round((i/number_of_games_to_play*100), 1)
	if percentage_copy != percentage_completed:
		print("{}% done".format(percentage_completed))
	percentage_copy = percentage_completed

	# Create a deck, shuffle it and divide between players
	deck = [2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,6,6,6,6,7,7,7,7,8,8,8,8,9,9,9,9,10,10,10,10,11,11,11,11,12,12,12,12,13,13,13,13,14,14,14,14]
	random.shuffle(deck)
	player_a_deck = deck[0:26]
	player_b_deck = deck[26:52]

	# Which player has the highest sum of cards
	card_sum_a = sum(player_a_deck)
	card_sum_b = sum(player_b_deck)
	if card_sum_a > card_sum_b:
		highest_deck = "a"
	elif card_sum_a < card_sum_b:
		highest_deck = "b"
	else:
		highest_deck = "equal"
	
	# Loop to turn cards within games
	number_of_plays = 0
	index = 1
	while True:
		try:
			if index == 1:
				number_of_plays += 1	# Add 1 to number of plays counter
				war_count = 0			# Reset war counter	
			# Player a has the largest card
			if player_a_deck[index-1] > player_b_deck[index-1]:
				war_or_not_war[0] += 1
				player_a_deck.extend(player_a_deck[:index])
				player_a_deck.extend(player_b_deck[:index])	
				del player_a_deck[:index]
				del player_b_deck[:index]
				index = 1			# If a play is decided, index is reset
			# Player b has the largest card
			elif player_a_deck[index-1] < player_b_deck[index-1]:
				war_or_not_war[0] += 1
				# Cards are added to the deck in a different order than they were played to avoid the risk of the game going on forever (infinite loop)
				player_b_deck.extend(player_b_deck[:index])	
				player_b_deck.extend(player_a_deck[:index])
				del player_a_deck[:index]
				del player_b_deck[:index]
				index = 1			# If a play is decided, index is reset
			# War is on!
			else:
				war_or_not_war[1] += 1
				index += 4			# In case of war the index is upped by four cards
				war_types[war_count] += 1
				war_count += 1
		# If a player has too few cards left to participate, game is over
		except IndexError:
			# If a player had no cards left and index is 1, the game was already over, so number of plays is corrected
			if index == 1:
				number_of_plays -= 1
			break
	
	# Single game is over #
	# Compare deck sizes to decide winner and add values to counters and lists
	deck_a = len(player_a_deck)
	deck_b = len(player_b_deck)
	if deck_a > deck_b:
		if highest_deck == "a":
			highest_deck_won += 1
		elif highest_deck == "b":
			highest_deck_lost += 1
	elif deck_a < deck_b:
		if highest_deck == "a":
			highest_deck_lost += 1
		elif highest_deck == "b":
			highest_deck_won += 1
	else:
		equal_games += 1
	
	number_of_plays_list.append(number_of_plays)
	number_of_games_counter += 1
	
# All games are over #
print("Der blev spillet {} spil".format(number_of_games_counter))
print("Det gennemsnitlige antal dueller var {}".format(sum(number_of_plays_list)/len(number_of_plays_list)))
print("Det højeste antal dueller var {}".format(max(number_of_plays_list)))
print("Det laveste antal dueller var {}".format(min(number_of_plays_list)))
print("Den spiller med højest sum af kort vandt {} gange ({}%)".format(highest_deck_won, round(highest_deck_won/number_of_games_counter*100)))
print("Den spiller med højest sum af kort tabte {} gange ({}%)".format(highest_deck_lost, round(highest_deck_lost/number_of_games_counter*100)))
print("Uafgjorte spil: {}".format(equal_games))
print("Antal enkeltkrig, dobbeltkrig, osv.: {}".format(", ".join(str(x) for x in war_types)))
print("Vendte kort uden krig og med krig: {}".format(", ".join(str(x) for x in war_or_not_war)))
print("Spillene tog {} sekunder".format(round(time.time() - start_time, 1)))

Fetch transactions from Nordnet – with PowerShell!

I was asked whether my Python program that fetches transactions from Nordnet could be translated into PowerShell. It can, albeit in a somewhat more rudimentary version. Here is code that logs in to Nordnet and fetches transaction data for a single account/portfolio. To make the script work, you need to insert a few values in the right places:

  • your Nordnet username and password
  • the from and to dates you want to fetch transactions for
  • the account number of the Nordnet account you want to fetch from (your first account has account number 1, and so on)

Here is the code:

# Force TLS 1.2 for the web requests
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

# First part of cookie setting prior to login
$url = 'https://classic.nordnet.dk/mux/login/start.html?cmpi=start-loggain&state=signin'
$r1 = iwr $url -SessionVariable cookies

# Second part of cookie setting prior to login
$url = 'https://classic.nordnet.dk/api/2/login/anonymous/'
$r2 = iwr $url -method 'POST' -Headers @{'Accept' = '*/*'} -WebSession $cookies

# Actual login - insert your Nordnet username and password here
$body = @{'username'=''; 'password'=''}
$url = 'https://classic.nordnet.dk/api/2/authentication/basic/login'
$r3 = iwr $url -method 'POST' -Body $body -Headers @{'Accept' = '*/*'} -WebSession $cookies

# Getting a NEXT cookie
$url = 'https://classic.nordnet.dk/oauth2/authorize?client_id=NEXT&response_type=code&redirect_uri=https://www.nordnet.dk/oauth2/'
$r4 = iwr $url -WebSession $cookies

# Fetch transactions - insert your account number (account_id) and from/to dates in the query string
$url = 'https://www.nordnet.dk/mediaapi/transaction/csv/filtered?locale=da-DK&account_id=1&from=2019-08-01&to=2019-10-01'
$r5 = iwr $url -WebSession $cookies

# Re-encode the response into readable text and drop the first character (a byte order mark)
$content = $r5.Content
$encoding = [System.Text.Encoding]::unicode
$bytes = $encoding.GetBytes($content)

$decoded_content = [System.Text.Encoding]::utf32.GetString($bytes)
$decoded_content = $decoded_content.Substring(1,$decoded_content.length-1)
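
# A possible extra step (my addition, not in the original post): save the
# decoded CSV to a file that Excel can open
$decoded_content | Out-File 'transactions.csv'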

Pull your transactions out of the new Nordnet

Here is an update of my old program for fetching transactions from Nordnet. It has been updated to work with Nordnet's new design and API:

# -*- coding: utf-8 -*-
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com
""" This program logs into a Nordnet account and extracts transactions as a csv file.
Handy for exporting to Excel with as few manual steps as possible """

import requests 
from datetime import datetime
from datetime import date

# USER ACCOUNT, PORTFOLIO AND PERIOD DATA. SHOULD BE EDITED FOR YOUR NEEDS #

# Nordnet user account credentials and accounts/portfolios names (choose yourself) and numbers.
# To get account numbers go to https://www.nordnet.dk/transaktioner and change
# between accounts. The number after "accid=" in the new URL is your account number.
# If you have only one account, your account number is 1.
user = ''
password = ''
accounts = {
	"Frie midler: Nordnet": "1",
	"Ratepension": "3",
}

# Start date (start of period for transactions) and date today used for extraction of transactions
startdate = '2013-01-01'
today = date.today()
enddate = datetime.strftime(today, '%Y-%m-%d')

# Manual data lines. These can be used if you have portfolios elsewhere that you would
# like to add manually to the data set. If no manual data the variable manualdataexists
# should be set to False
manualdataexists = True
manualdata = """
Id;Bogføringsdag;Handelsdag;Valørdag;Transaktionstype;Værdipapirer;Instrumenttyp;ISIN;Antal;Kurs;Rente;Afgifter;Beløb;Valuta;Indkøbsværdi;Resultat;Totalt antal;Saldo;Vekslingskurs;Transaktionstekst;Makuleringsdato;Verifikations-/Notanummer;Depot
;30-09-2013;30-09-2013;30-09-2013;KØBT;Obligationer 3,5%;Obligationer;;72000;;;;-69.891,54;DKK;;;;;;;;;;Frie midler: Finansbanken
"""

# CREATE VARIABLES FOR LATER USE. #

# Creates a dictionary to use with cookies	
cookies = {}

# A variable to store transactions before saving to csv
transactions = ""

# LOGIN TO NORDNET #

# First part of cookie setting prior to login
url = 'https://classic.nordnet.dk/mux/login/start.html?cmpi=start-loggain&state=signin'
request = requests.get(url)
cookies['LOL'] = request.cookies['LOL']
cookies['TUX-COOKIE'] = request.cookies['TUX-COOKIE']

# Second part of cookie setting prior to login
url = 'https://classic.nordnet.dk/api/2/login/anonymous'
request = requests.post(url)
cookies['NOW'] = request.cookies['NOW']

# Actual login that gets us cookies required for later use
url = 'https://classic.nordnet.dk/api/2/authentication/basic/login'
request = requests.post(url,cookies=cookies, data = {'username': user, 'password': password})
cookies['NOW'] = request.cookies['NOW']
cookies['xsrf'] = request.cookies['xsrf']

# Getting a NEXT cookie
url = 'https://classic.nordnet.dk/oauth2/authorize?client_id=NEXT&response_type=code&redirect_uri=https://www.nordnet.dk/oauth2/'
request = requests.get(url, cookies=cookies)
cookies['NEXT'] = request.history[1].cookies['NEXT']

# GET TRANSACTION DATA #

# Payload and url for transaction requests
payload = {
'locale': 'da-DK',
'from': startdate,
'to': enddate,
}

url = "https://www.nordnet.dk/mediaapi/transaction/csv/filtered"

firstaccount = True
for portfolioname, id in accounts.items():
	payload['account_id'] = id
	data = requests.get(url, params=payload, cookies=cookies)
	result = data.content.decode('utf-16')
	result = result.replace('\t',';')

	result = result.splitlines()
	
	firstline = True
	for line in result:
		# For first account and first line, we use headers and add an additional column
		if line and firstline == True and firstaccount == True:
			transactions += line + ';' + "Depot" + "\n"
			firstaccount = False
			firstline = False
		# First lines of additional accounts are discarded
		elif line and firstline == True and firstaccount == False:
			firstline = False
		# Content lines are added
		elif line and firstline == False:
			# Fix because Nordnet sometimes adds one empty column too many
			if line.count(';') == 23:
				line = line.replace('; ',' ')
			transactions += line + ';' + portfolioname + "\n"

# ADD MANUAL LINES IF ANY #
if manualdataexists == True:
	manualdata = manualdata.split("\n",2)[2]
	transactions += manualdata

# Saves CSV
with open("transactions.csv", "w", encoding='utf8') as fout:
	fout.write(transactions)
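
If you would rather inspect the result in Python than in Excel, a minimal sketch could look like this (assuming the pandas package, which the program above does not use, is installed):

import pandas as pd

# Nordnet's export is semicolon-separated and uses comma as the decimal separator
transactions = pd.read_csv("transactions.csv", sep=";", decimal=",")
print(transactions.head())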

Fetch prices – historical and real-time – for your securities in the new Nordnet

Nordnet has a new design and a new API. That means a few more hoops to jump through than before when you want to get hold of prices for your securities.

Here is a Python program that can help you. It requires a Nordnet login.

# -*- coding: utf-8 -*-
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com
""" This program extracts historical stock prices from Nordnet (and Morningstar as a fallback) """

import requests
from datetime import datetime
from datetime import date
import os

# Nordnet user account credentials
user = ''
password = ''

# DATE AND STOCK DATA. SHOULD BE EDITED FOR YOUR NEEDS #

# Start date (start of historical price period)
startdate = '2013-01-01'

# List of shares to look up prices for.
# Format is: Name, Morningstar id, Nordnet stock identifier
# See e.g. https://www.nordnet.dk/markedet/aktiekurser/16256554-novo-nordisk-b
# (identifier is 16256554)
# All shares must have a name (whatever you like). To get prices they must
# either have a Nordnet identifier or a Morningstar id
sharelist = [
["Maj Invest Pension","F0GBR064UH",16099877],
["Novo Nordisk B A/S","0P0000A5BQ",16256554],
["Nordnet Superfonden Danmark","F00000TH8X",""],
]

# CREATE VARIABLES FOR LATER USE. #

# A variable to store historical prices before saving to csv	
finalresult = ""
finalresult += '"date";"price";"instrument"' + '\n'

# A cookie dictionary for storing cookies
cookies = {}

# NORDNET LOGIN #

# First part of cookie setting prior to login
url = 'https://classic.nordnet.dk/mux/login/start.html?cmpi=start-loggain&state=signin'
request = requests.get(url)
cookies['LOL'] = request.cookies['LOL']
cookies['TUX-COOKIE'] = request.cookies['TUX-COOKIE']

# Second part of cookie setting prior to login
url = 'https://classic.nordnet.dk/api/2/login/anonymous'
request = requests.post(url, cookies=cookies)
cookies['NOW'] = request.cookies['NOW']

# Actual login that gets us cookies required for later use
url = "https://classic.nordnet.dk/api/2/authentication/basic/login"
request = requests.post(url,cookies=cookies, data = {'username': user, 'password': password})
cookies['NOW'] = request.cookies['NOW']
cookies['xsrf'] = request.cookies['xsrf']

# Getting a NEXT cookie
url = "https://classic.nordnet.dk/oauth2/authorize?client_id=NEXT&response_type=code&redirect_uri=https://www.nordnet.dk/oauth2/"
request = requests.get(url, cookies=cookies)
cookies['NEXT'] = request.history[1].cookies['NEXT']

# LOOPS TO REQUEST HISTORICAL PRICES AT NORDNET AND MORNINGSTAR #

# Nordnet loop to get historical prices
for share in sharelist:
	# A Nordnet stock identifier must exist
	if share[2]:
		url = "https://www.nordnet.dk/api/2/instruments/historical/prices/" + str(share[2])
		payload = {"from": startdate, "fields": "last"}
		data = requests.get(url, params=payload, cookies=cookies)
		jsondecode = data.json()
		
		# Sometimes the final date is returned twice. A list is created to check for duplicates.
		datelist = []
		
		for value in jsondecode[0]['prices']:
			price = str(value['last'])
			price = price.replace(".",",")
			date = datetime.fromtimestamp(value['time'] / 1000)
			date = datetime.strftime(date, '%Y-%m-%d')
			# Only adds a date if it has not been added before
			if date not in datelist:
				datelist.append(date)
				finalresult += '"' + date + '"' + ";" + '"' + price + '"' + ";" + '"' + share[0] + '"' + "\n"

# Morningstar loop to get historical prices			
for share in sharelist:
	# Only runs for one specific fund in this instance
	if share[0] == "Nordnet Superfonden Danmark":
		payload = {"id": share[1], "currencyId": "DKK", "idtype": "Morningstar", "frequency": "daily", "startDate": startdate, "outputType": "COMPACTJSON"}
		data = requests.get("http://tools.morningstar.dk/api/rest.svc/timeseries_price/nen6ere626", params=payload)
		jsondecode = data.json()
		
		for lists in jsondecode:
			price = str(lists[1])
			price = price.replace(".",",")
			date = datetime.fromtimestamp(lists[0] / 1000)
			date = datetime.strftime(date, '%Y-%m-%d')
			finalresult += '"' + date + '"' + ";" + '"' + price + '"' + ";" + '"' + share[0] + '"' + "\n"

# WRITE CSV OUTPUT TO FILE #			

with open("kurser.csv", "w", newline='', encoding='utf8') as fout:
	fout.write(finalresult)

Things you don't want to know about the card game War

If you happen to have a child around the age of 5, you can play the card game War. The cards are shuffled and divided equally between two players; each player turns over a card from their pile at the same time; the highest card wins; if the cards are equally high, it is war. It is so simple that you might as well have a computer play it.

So I wrote a little Python program that can simulate the card game.

I discovered a hole in the rules: nowhere is it described what happens when a player does not have enough cards to take part in a war (or a double war, triple war, and so on). I decided that if a player at some point lacks the cards to take part, the player who cannot take part loses. In the very rare situation where neither player has enough cards to take part (a many-times-multiple war early in the game), the player with the most cards wins. If both players have the same number of cards, the game is a draw.
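
Expressed as code, the rule I settled on could look like this minimal sketch (a hypothetical helper for illustration, not part of the actual program below):

def decide_war_winner(deck_a, deck_b, cards_needed):
	"""Decide the game when a war requires more cards than a player holds."""
	a_short = len(deck_a) < cards_needed
	b_short = len(deck_b) < cards_needed
	if a_short and b_short:
		# Neither player can take part: most cards wins, equal counts mean a draw
		if len(deck_a) == len(deck_b):
			return "draw"
		return "a" if len(deck_a) > len(deck_b) else "b"
	# Otherwise the player who cannot take part loses
	return "b" if a_short else "a"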

I had the computer play 1 million games of War, and here is what I can tell you about War that you don't want to know:

  • The average number of duels in a game of War is 177
  • The game with the most duels had 1,825 duels
  • The game with the fewest had 4 duels
  • The player with the highest sum of cards after the shuffle won 573,405 times
  • The player with the lowest sum of cards won 397,602 times
  • Over the course of the games there were:
    • Single wars: 12,366,762 times
    • Double wars: 888,024 times
    • Triple wars: 60,727 times
    • Quadruple wars: 3,852 times
    • Quintuple wars: 206 times
    • Sextuple wars: 10 times

Here is the program:

import random

krig1 = 0
krig2 = 0
krig3 = 0
krig4 = 0
krig5 = 0
krig6 = 0
krig7 = 0

number_of_plays_list = []
not_war = 0
war = 0

highest_deck_won = 0
highest_deck_lost = 0
equal_games = 0

i = 0
number_of_games = 1000000

while i < number_of_games:	# < rather than <= so that exactly number_of_games games are played
	number_of_plays_counter = 0
	deck = [2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,6,6,6,6,7,7,7,7,8,8,8,8,9,9,9,9,10,10,10,10,11,11,11,11,12,12,12,12,13,13,13,13,14,14,14,14]
	random.shuffle(deck)

	player_a_deck = deck[0:26]
	player_b_deck = deck[26:52]

	if sum(player_a_deck) > sum(player_b_deck):
		highest_deck = "a"
	elif sum(player_a_deck) < sum(player_b_deck):
		highest_deck = "b"
	else:
		highest_deck = "equal"

	while len(player_a_deck) > 0 and len(player_b_deck) > 0:
		number_of_plays_counter += 1
		if player_a_deck[0] > player_b_deck[0]:
			not_war += 1
			player_a_deck.append(player_a_deck[0])
			player_a_deck.append(player_b_deck[0])
			del player_a_deck[0]
			del player_b_deck[0]
		elif player_a_deck[0] < player_b_deck[0]:
			not_war += 1
			player_b_deck.append(player_b_deck[0])
			player_b_deck.append(player_a_deck[0])
			del player_a_deck[0]
			del player_b_deck[0]
		elif player_a_deck[0] == player_b_deck[0]:
			war += 1
			krig1 += 1
			if len(player_a_deck) >= 5 and len(player_b_deck) >= 5:
				if player_a_deck[4] > player_b_deck[4]:
					player_a_deck.extend(player_a_deck[0:5])
					player_a_deck.extend(player_b_deck[0:5])
					del player_a_deck[0:5]
					del player_b_deck[0:5]
				elif player_a_deck[4] < player_b_deck[4]:
					player_b_deck.extend(player_b_deck[0:5])
					player_b_deck.extend(player_a_deck[0:5])
					del player_a_deck[0:5]
					del player_b_deck[0:5]
				elif player_a_deck[4] == player_b_deck[4]:
					krig2 += 1
					if len(player_a_deck) >= 9 and len(player_b_deck) >= 9:			
						if player_a_deck[8] > player_b_deck[8]:
							player_a_deck.extend(player_a_deck[0:9])
							player_a_deck.extend(player_b_deck[0:9])
							del player_a_deck[0:9]
							del player_b_deck[0:9]
						elif player_a_deck[8] < player_b_deck[8]:
							player_b_deck.extend(player_b_deck[0:9])
							player_b_deck.extend(player_a_deck[0:9])
							del player_a_deck[0:9]
							del player_b_deck[0:9]	
						elif player_a_deck[8] == player_b_deck[8]:
							krig3 += 1
							if len(player_a_deck) >= 13 and len(player_b_deck) >= 13:
								if player_a_deck[12] > player_b_deck[12]:
									player_a_deck.extend(player_a_deck[0:13])
									player_a_deck.extend(player_b_deck[0:13])
									del player_a_deck[0:13]
									del player_b_deck[0:13]
								elif player_a_deck[12] < player_b_deck[12]:
									player_b_deck.extend(player_b_deck[0:13])
									player_b_deck.extend(player_a_deck[0:13])
									del player_a_deck[0:13]
									del player_b_deck[0:13]	
								elif player_a_deck[12] == player_b_deck[12]:
									krig4 += 1
									if len(player_a_deck) >= 17 and len(player_b_deck) >= 17:
										if player_a_deck[16] > player_b_deck[16]:
											player_a_deck.extend(player_a_deck[0:17])
											player_a_deck.extend(player_b_deck[0:17])
											del player_a_deck[0:17]
											del player_b_deck[0:17]
										elif player_a_deck[16] < player_b_deck[16]:
											player_b_deck.extend(player_b_deck[0:17])
											player_b_deck.extend(player_a_deck[0:17])
											del player_a_deck[0:17]
											del player_b_deck[0:17]
										elif player_a_deck[16] == player_b_deck[16]:
											krig5 += 1
											if len(player_a_deck) >= 21 and len(player_b_deck) >= 21:
												if player_a_deck[20] > player_b_deck[20]:
													player_a_deck.extend(player_a_deck[0:21])
													player_a_deck.extend(player_b_deck[0:21])
													del player_a_deck[0:21]
													del player_b_deck[0:21]
												elif player_a_deck[20] < player_b_deck[20]:
													player_b_deck.extend(player_b_deck[0:21])
													player_b_deck.extend(player_a_deck[0:21])
													del player_a_deck[0:21]
													del player_b_deck[0:21]										
												elif player_a_deck[20] == player_b_deck[20]:
													krig6 += 1
													if len(player_a_deck) >= 25 and len(player_b_deck) >= 25:
														if player_a_deck[24] > player_b_deck[24]:
															player_a_deck.extend(player_a_deck[0:25])
															player_a_deck.extend(player_b_deck[0:25])
															del player_a_deck[0:25]
															del player_b_deck[0:25]
														elif player_a_deck[24] < player_b_deck[24]:
															player_b_deck.extend(player_b_deck[0:25])
															player_b_deck.extend(player_a_deck[0:25])
															del player_a_deck[0:25]
															del player_b_deck[0:25]
														elif player_a_deck[24] == player_b_deck[24]:
															krig7 += 1
															break
													else:
														break
											else:
												break
									else:
										break
							else:
								break
					else:
						break
			else:
				break
	if len(player_a_deck) > len(player_b_deck):
		if highest_deck == "a":
			highest_deck_won += 1
		elif highest_deck == "b":
			highest_deck_lost += 1
	elif len(player_a_deck) < len(player_b_deck):
		if highest_deck == "a":
			highest_deck_lost += 1
		elif highest_deck == "b":
			highest_deck_won += 1
	else:
		equal_games += 1
	number_of_plays_list.append(number_of_plays_counter)
	i += 1
	print(i/number_of_games)

print("Der blev spillet {} spil".format(number_of_games))
print("Det gennemsnitlige antal dueller var {}".format(sum(number_of_plays_list)/len(number_of_plays_list)))
print("Det højeste antal dueller var {}".format(max(number_of_plays_list)))
print("Det laveste antal dueller var {}".format(min(number_of_plays_list)))
print("Den spiller med højest sum af kort vandt {} gange".format(highest_deck_won))
print("Den spiller med højest sum af kort tabte {} gange".format(highest_deck_lost))
print(krig1, krig2, krig3, krig4, krig5, krig6, krig7)
print(not_war, war)
print("Uafgjorte spil: {}".format(equal_games))

Fetch currency data from Nordnet with Python

Nordnet has a new website with a market overview where, among other things, you can find currency rates. I wanted to get hold of those for an Excel sheet 🙂

I pressed F12 in my browser to examine what happens when I click "Currencies" on the page, and which cookies and client identification were required to get data back. (You can read more about the method in many of my other programming posts.)

It ended up as this program, which generates a CSV file with the latest exchange rate for a number of common currencies from Nordnet:

# -*- coding: utf-8 -*-
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com
""" This program gets currency data from Nordnet.
Handy for exporting to Excel with as few manual steps as possible """
import requests 

# Creates a dictionary to use for cookies	
cookies = {}

# Sets NEXT cookie
url = 'https://www.nordnet.dk/markedet'
r = requests.get(url)
cookies['NEXT'] = r.cookies['NEXT']

# Requests currency data
headers = {'client-id': 'NEXT'}

# Gets currency data
url = 'https://www.nordnet.dk/api/2/instrument_search/query/indicator?entity_type=CURRENCY&apply_filters=market_overview_group%3DDK_GLOBAL_MO'
r = requests.get(url, cookies=cookies, headers=headers)
currencies = r.json()

# Generate CSV output of last value by looping through currencies
output = "navn;senest\n"
for currency in currencies['results']:
	name = currency['instrument_info']['name']
	price = str(currency['price_info']['last']['price'])	
	price = price.replace(".",",")
	output += name + ";" + price + "\n"

# Write CSV output to file #
with open("currency.csv", "w", encoding='utf8') as fout:
	fout.write(output)

Fetch data about your electricity consumption from Ørsted with Python

This little Python program generates CSV files with your electricity consumption data from Ørsted (formerly DONG) if you have a remotely read meter. You can use the program if, for example, you would like to keep an eye on your consumption in an Excel document.

The finished program

# -*- coding: utf-8 -*-
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com
""" This program gets your Ørsted electricity consumption data and saves it to CSV"""

import requests 
from datetime import datetime
from datetime import date
from datetime import timedelta

# USER ACCOUNT AND PERIOD DATA. SHOULD BE EDITED FOR YOUR NEEDS #

# User account credentials
user = ''	#E-mail address
password = ''		#Password

# Start date and date today used for consumption data
startdate = '2019-01-01'
today = date.today()
enddate = datetime.strftime(today, '%Y-%m-%d')

# API LOGIN #
url = 'https://api.obviux.dk/v2/authenticate'

headers = {
	'X-Customer-Ip': '0.0.0.0'
	}
	
credentials = {
	'customer': user,
	'password': password
	}

request = requests.post(url, headers=headers, json=credentials)
response = request.json()

# Save data for further API requests
external_id = response['external_id']
headers['Authorization'] = response['token']

# GET EAN #
url = 'https://api.obviux.dk/v2/deliveries'
request = requests.get(url, headers=headers)
response = request.json()

# Assuming only one Ørsted agreement, save ean value for further API queries for that agreement
# In case of more than one agreement, loop through list and save values instead
ean = response[0]['ean']
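# A sketch of the multi-agreement case mentioned above (my assumption, not part
# of the original program):
# eans = [delivery['ean'] for delivery in response]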

# API CALL #
base_url = 'https://capi.obviux.dk/v1/consumption/customer/'
id_ean = external_id + '/ean/' + ean + '/'

# There's limits on periods for each type of consumption data, so we create lists of periods
startdate_datetime = datetime.strptime(startdate, '%Y-%m-%d')
enddate_datetime = datetime.strptime(enddate, '%Y-%m-%d')

# The number of days for each period
days_for_type = {
	"hourly": 15,
	"daily": 370,
	"weekly": 420,
	"monthly": 1860,
	"yearly": 1830	
	}

def get_consumption_data(type):
	# Loop that returns a list of periods (dates) to request
	periods = []
	loop = True	
	start_of_periods = startdate_datetime
	days = days_for_type[type]

	while loop == True:
		start = datetime.strftime(start_of_periods, '%Y-%m-%d')
		end = start_of_periods + timedelta(days=days)
		# Loop ends and replaces end value if the calculated end date is later than what the user is looking for
		if end > enddate_datetime:
			end = enddate
			loop = False
		# Every weekly period ends on a Sunday and next starts on Monday
		elif type == "weekly" and not end.weekday() == 6:
			correction = end.weekday() + 1
			end -= timedelta(days=correction)
			end = datetime.strftime(end, '%Y-%m-%d')
			start_of_periods += timedelta(days=days + 1 - correction)
		# All months end on last day of month and next period starts the 1st of next month
		elif type == "monthly":
			day_in_month = end.day
			end -= timedelta(days=day_in_month)
			start_of_periods = end + timedelta(days=1)
			end = datetime.strftime(end, '%Y-%m-%d')
		# All yearly periods end on 31st December and next period starts next year January 1st
		elif type == "yearly":
			start_of_periods = datetime.strptime(str(end.year)+"-01-01", '%Y-%m-%d')
			end = str(end.year-1) + "-12-31"
		# Covers hourly and daily periods and weeks ending on Sunday
		else:
			end = datetime.strftime(end, '%Y-%m-%d')
			start_of_periods += timedelta(days=days + 1)
		periods.append([start, end])

	# API requests to cover requested periods
	url = base_url + id_ean + type
	responses = []
	for period in periods:
		params = {
			'from': period[0],
			'to': period[1]
			}
		request = requests.get(url, headers=headers, params=params)
		response = request.json()
		responses.append(response)

	# Write responses to CSV
	if type == "hourly":
		output = "date;hour;amount\n"
	elif type == "weekly":
		output = "date;weeknum;amount\n"
	else:
		output = "date;amount\n"
	for entry in responses:
		for data in entry['data']:
			if data['consumptions']:
				for d in data['consumptions']:
					start = datetime.strptime(d['start'], '%Y-%m-%dT%H:%M:%S.%f%z')
					start_in_timezone = datetime.astimezone(start).strftime("%Y-%m-%d %H:%M")
					amount = str(d['kWh']).replace(".",",")
					if type == "hourly":
						hour = datetime.astimezone(start).strftime("%H")
						output += start_in_timezone + ";" + hour + ";" + amount + "\n"
					elif type == "weekly":
						weeknum = datetime.astimezone(start).strftime("%V")
						output += start_in_timezone + ";" + weeknum + ";" + amount + "\n"
					else:
						output += start_in_timezone + ";" + amount + "\n"
		# Special case for year when change is done from manual to automatic consumption reading
		if type == "yearly":
			if entry['readings']:
				for ent in entry['readings']:
					if ent['readings']:
						for e in ent['readings']:
							if e['consumption']:
								date = e['startdate'] + " 00:00"
								amount = str(e['consumption']).replace(".",",")
								output += date + ";" + amount + "\n"
	filename = type + ".csv"
	with open(filename, "w", encoding='utf8') as fout:
		fout.write(output)

# Loop to cycle through all consumption endpoints
endpoints = ["hourly", "daily", "weekly", "monthly", "yearly"]
for endpoint in endpoints:
	get_consumption_data(endpoint)

Wallnot's Twitter bot: Version 2

It is only a few days since Wallnot.dk's Twitter bot went live. You can find the bot here and my post about it here.

The bot worked well enough, but because of a limit of 250 requests per month in Twitter's API, I could only update 4 times a day, which is not very often (the old program made 2 requests every time it ran, i.e. 30 days * 4 updates * 2 requests = 240 requests).

Fortunately I found TWINT, a Python module that makes it easy to fetch data from Twitter without using Twitter's API and its tiresome limits.

Reusing most of my old code, I have now made a version of the bot that can run as often as I like. For now I have set it to run 4 times an hour.

Just for fun, I have also added a set of friendly adjectives about the subscribers of Politiken and Zetland, which the program picks from at random every time it posts a link to Twitter.

The finished code

Here is the finished code, if you are interested.

# -*- coding: utf-8 -*-
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com

import requests
from bs4 import BeautifulSoup
from datetime import datetime
from datetime import date
from datetime import timedelta
import json
import time
import random
import twint	# https://github.com/twintproject/twint
from TwitterAPI import TwitterAPI

# CONFIGURATION #
# List to store articles to post to Twitter
articlestopost = []

# Yesterday's date variable
yesterday = date.today() - timedelta(days=1)
since = yesterday.strftime("%Y-%m-%d")

# Twint configuration
c = twint.Config()
c.Hide_output = True
c.Store_object = True
c.Since = since

# API LOGIN
client_key = ''
client_secret = ''
access_token = ''
access_secret = ''
api = TwitterAPI(client_key, client_secret, access_token, access_secret)


# POLITIKEN #
# Run search
searchterm = "politiken.dk/del"
c.Search = searchterm
twint.run.Search(c)
tweets = twint.output.tweets_object

# Add urls in tweets to list and remove any duplicates from list
urllist = []
for tweet in tweets:
	for url in tweet.urls:
		if searchterm in url:
			urllist.append(url)

urllist = list(set(urllist))

# Only process urls that were not in our last Twitter query
proceslist = []
with open("./pol_lastbatch.json", "r", encoding="utf8") as fin:
	lastbatch = list(json.load(fin))
	for url in urllist:
		if url not in lastbatch:
			proceslist.append(url)
# Save current query to use for next time
with open("./pol_lastbatch.json", "wt", encoding="utf8") as fout:
	lastbatch = json.dumps(urllist)
	fout.write(lastbatch)

# Request articles and get titles and dates and sort by dates
articlelist = []
titlecheck = []

for url in proceslist:
	try:
		data = requests.get(url)
		result = data.text
		if '"isAccessibleForFree": "True"' not in result:
			soup = BeautifulSoup(result, "lxml")
			# Finds titles and timestamps
			title = soup.find('meta', attrs={'property':'og:title'})
			title = title['content']
			timestamp = soup.find('meta', attrs={'property':'article:published_time'})
			timestamp = timestamp['content']
			dateofarticle = datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S%z')
			realurl = data.history[0].headers['Location']
			if title not in titlecheck:
				articlelist.append({"title": title, "url": realurl, "date": dateofarticle})
				titlecheck.append(title)			
	except Exception as e:
		print(url)
		print(e)
			
articlelist_sorted = sorted(articlelist, key=lambda k: k['date']) 

# Check if article is already posted and update list of posted articles
with open("./pol_published.json", "r", encoding="utf8") as fin:
	alreadypublished = list(json.load(fin))
	# File below used for paywall.py to update wallnot.dk
	with open("./pol_full_share_links.json", "r", encoding="utf8") as finalready:	
		alreadypublishedalready = list(json.load(finalready))
		for art in articlelist_sorted:
			url = art['url']
			token = url.index("?shareToken")
			url = url[:token]
			if url not in alreadypublished:
				alreadypublished.append(url)
				articlestopost.append(art)
				alreadypublishedalready.append(art['url'])
		# Save updated already published links
		with open("./pol_published.json", "wt", encoding="utf8") as fout:
			alreadypublishedjson = json.dumps(alreadypublished)
			fout.write(alreadypublishedjson)
		with open("./pol_full_share_links.json", "wt", encoding="utf8") as fout:
			alreadypublishedjson = json.dumps(alreadypublishedalready)
			fout.write(alreadypublishedjson)


# ZETLAND #
# Run search
searchterm = "zetland.dk/historie"
c.Search = searchterm
twint.run.Search(c)
tweets = twint.output.tweets_object

# Add urls in tweets to list and remove any duplicates from list
urllist = []
for tweet in tweets:
	for url in tweet.urls:
		if searchterm in url:
			urllist.append(url)

urllist = list(set(urllist))

# Only process urls that were not in our last Twitter query
proceslist = []
with open("./zet_lastbatch.json", "r", encoding="utf8") as fin:
	lastbatch = list(json.load(fin))
	for url in urllist:
		if url not in lastbatch:
			proceslist.append(url)
# Save current query to use for next time
with open("./zet_lastbatch.json", "wt", encoding="utf8") as fout:
	lastbatch = json.dumps(urllist)
	fout.write(lastbatch)

# Request articles and get titles and dates and sort by dates
articlelist = []
titlecheck = []

for url in proceslist:
	try:
		data = requests.get(url)
		result = data.text
		soup = BeautifulSoup(result, "lxml")
		title = soup.find('meta', attrs={'property':'og:title'})
		title = title['content']
		timestamp = soup.find('meta', attrs={'property':'article:published_time'})
		timestamp = timestamp['content']
		timestamp = timestamp[:timestamp.find("+")]
		dateofarticle = datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S.%f')
		if title not in titlecheck:
			articlelist.append({"title": title, "url": url, "date": dateofarticle})
			titlecheck.append(title)
	except Exception as e:
		print(url)
		print(e)
			
articlelist_sorted = sorted(articlelist, key=lambda k: k['date']) 

# Check if article is already posted and update list of posted articles
with open("./zet_published.json", "r", encoding="utf8") as fin:
	alreadypublished = list(json.load(fin))
	for art in articlelist_sorted:
		title = art['title']
		if title not in alreadypublished:
			alreadypublished.append(title)
			articlestopost.append(art)
	# Save updated already published links
	with open("./zet_published.json", "wt", encoding="utf8") as fout:
		alreadypublishedjson = json.dumps(alreadypublished, ensure_ascii=False)
		fout.write(alreadypublishedjson)


# POST TO TWITTER #
friendlyterms = ["flink","rar","gavmild","velinformeret","intelligent","sød","afholdt","bedårende","betagende","folkekær","godhjertet","henrivende","smagfuld","tækkelig","hjertensgod","graciøs","galant","tiltalende","prægtig","kær","godartet","human","indtagende","fortryllende","nydelig","venlig","udsøgt","klog","kompetent","dygtig","ejegod","afholdt","omsorgsfuld","elskværdig","prægtig","skattet","feteret"]
enjoyterms = ["God fornøjelse!", "Nyd den!", "Enjoy!", "God læsning!", "Interessant!", "Spændende!", "Vidunderligt!", "Fantastisk!", "Velsignet!", "Glæd dig!", "Læs den!", "Godt arbejde!", "Wauv!"]

if articlestopost:
	for art in articlestopost:
		if "zetland" in art['url']:
			medium = "Zetland"
		else:
			medium = "Politiken"
		friendlyterm = random.choice(friendlyterms)
		enjoyterm = random.choice(enjoyterms)
		status = "En " + friendlyterm + " abonnent på " + medium + " har delt en artikel. " + enjoyterm + " " + art['url']
		r = api.request('statuses/update', {'status': status})
		time.sleep(15)

Wallnot's new Twitter bot

Update: I have made a new, improved version of the bot. Read about it here.

Both Zetland and Politiken have a feature where subscribers can share paywalled articles with friends, acquaintances and the public. The article gets a unique URL that unlocks the paywall and lets anyone read the article.

I figured that Wallnot, my website with articles that are not behind a paywall, needed to be more present on social media.

So I made a bot that searches Twitter for shared articles from Zetland and Politiken and posts the links as tweets. The bot updates at around 8:25, 12:25, 16:25 and 20:25. It would be great to update more times a day, but then Twitter wants money.

You can find Wallnot's new Twitter bot here: https://twitter.com/wallnot_dk

This is what it looks like when Wallnot's bot tweets.

I used Python and the TwitterAPI module. If you want to run the program yourself, you need to create a developer account and an app with Twitter. See https://developer.twitter.com/.

Here is the finished program.

# -*- coding: utf-8 -*-
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com
# THIS PROGRAM POSTS NEW SHARED ARTICLES FROM ZETLAND.DK AND POLITIKEN.DK TO TWITTER

import requests
from bs4 import BeautifulSoup
from datetime import datetime
import json
import time
from nested_lookup import nested_lookup
from TwitterAPI import TwitterAPI

articlestopost = []

# API LOGIN - INSERT YOUR OWN VALUES HERE
client_key = ''
client_secret = ''
access_token = ''
access_secret = ''
api = TwitterAPI(client_key, client_secret, access_token, access_secret)


# POLITIKEN.DK SEARCH #
SEARCH_TERM = 'url:"politiken.dk/del/"'
PRODUCT = '30day'
LABEL = 'prod'

r = api.request('tweets/search/%s/:%s' % (PRODUCT, LABEL), 
                {'query':SEARCH_TERM})

tweet_data = json.loads(r.text)
prettyjson = json.dumps(tweet_data, ensure_ascii=False, indent=4) # Only needed for debugging, to prettify the json

# Looks for all instances of expanded_url in json	
linklist = list(set(nested_lookup('expanded_url', tweet_data)))

urllist = []
for link in linklist:
	if "politiken.dk/del" in link:
		urllist.append(link)

# Request articles and get titles and dates and sort by dates
articlelist = []
titlecheck = []

for url in urllist:
	try:
		data = requests.get(url)
		result = data.text
		if '"isAccessibleForFree": "True"' not in result:
			soup = BeautifulSoup(result, "lxml")
			# Finds titles and timestamps
			title = soup.find('meta', attrs={'property':'og:title'})
			title = title['content']
			timestamp = soup.find('meta', attrs={'property':'article:published_time'})
			timestamp = timestamp['content']
			dateofarticle = datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S%z')
			realurl = data.history[0].headers['Location']
			if title not in titlecheck:
				articlelist.append({"title": title, "url": realurl, "date": dateofarticle})
				titlecheck.append(title)			
	except:
		print(url)
			
articlelist_sorted = sorted(articlelist, key=lambda k: k['date'], reverse=True) 

# Check if article is already posted and update list of posted articles
with open("./pol_published.json", "r", encoding="utf8") as fin:
	alreadypublished = list(json.load(fin))
	for art in articlelist_sorted:
		url = art['url']
		token = url.index("?shareToken")
		url = url[:token]
		if url not in alreadypublished:
			alreadypublished.append(url)
			articlestopost.append(art)
	# Save updated already published links
	with open("./pol_published.json", "wt", encoding="utf8") as fout:
		alreadypublishedjson = json.dumps(alreadypublished)
		fout.write(alreadypublishedjson)

# ZETLAND.DK SEARCH #
SEARCH_TERM = 'url:"zetland.dk/historie"'
PRODUCT = '30day'
LABEL = 'prod'

r = api.request('tweets/search/%s/:%s' % (PRODUCT, LABEL), 
                {'query':SEARCH_TERM})

tweet_data = json.loads(r.text)
prettyjson = json.dumps(tweet_data, ensure_ascii=False, indent=4) # Only needed for debugging, to prettify the json

# Looks for all instances of expanded_url in json	
linklist = list(set(nested_lookup('expanded_url', tweet_data)))

urllist = []
for link in linklist:
	if "zetland.dk/historie" in link:
		urllist.append(link)

# Request articles and get titles and dates and sort by dates
articlelist = []
titlecheck = []

for url in urllist:
	try:
		data = requests.get(url)
		result = data.text

		# Soup site and create a dictionary of links and their titles and dates
		articledict = {}
		soup = BeautifulSoup(result, "lxml")

		title = soup.find('meta', attrs={'property':'og:title'})
		title = title['content']
		
		timestamp = soup.find('meta', attrs={'property':'article:published_time'})
		timestamp = timestamp['content']
		timestamp = timestamp[:timestamp.find("+")]
		dateofarticle = datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S.%f')
		
		if title not in titlecheck:
			articlelist.append({"title": title, "url": url, "date": dateofarticle})
			titlecheck.append(title)
	except:
		print(url)
			
articlelist_sorted = sorted(articlelist, key=lambda k: k['date'], reverse=True) 

# Check if article is already posted and update list of posted articles
with open("./zet_published.json", "r", encoding="utf8") as fin:
	alreadypublished = list(json.load(fin))
	for art in articlelist_sorted:
		title = art['title']
		if title not in alreadypublished:
			alreadypublished.append(title)
			articlestopost.append(art)
	# Save updated already published links
	with open("./zet_published.json", "wt", encoding="utf8") as fout:
		alreadypublishedjson = json.dumps(alreadypublished, ensure_ascii=False)
		fout.write(alreadypublishedjson)


# POST TO TWITTER #
if articlestopost:
	for art in articlestopost:
		if "zetland" in art['url']:
			medium = "Zetland"
		else:
			medium = "Politiken"
		status = "En flink abonnent på " + medium + " har delt en betalingsartikel. God fornøjelse! " + art['url']
		r = api.request('statuses/update', {'status': status})
		time.sleep(5)