
How to make your own free Weekendavisen

Here is something I discovered while building https://wallnot.dk (which only publishes free articles): Weekendavisen is free!

A small excerpt of a paywalled article from Weekendavisen.dk as neatly structured JSON.

Although https://www.weekendavisen.dk/ looks like a typical Danish online newspaper with free articles and paywalled articles all mixed together, Weekendavisen actually publishes its entire content. They probably don't know it themselves – but the developer at the clever web agency that built their site certainly does.

The paper's overview of this week's edition – this week it is https://www.weekendavisen.dk/2019-51/oversigt – contains a fully public JSON string with the entire paper's content: full text, links to audio versions of the articles, the whole lot.

That is pretty amateurish.

You don't see it in your browser when you visit the page, but it is there.

I have written a small Python script that generates your own personal Weekendavisen for the current week in a file called index.html. It doesn't look great: you only get the full texts, no images or links to the audio versions. You can work with the JSON string yourself if you want it to look nicer.

I may be ruining it for myself, because if Weekendavisen fixes the mistake, I will probably have to recode the part of wallnot.dk that shows free Weekendavisen articles.

Enjoy your free Weekendavisen.

# The Danish newspaper Weekendavisen.dk publishes all articles - even those supposedly behind a paywall - as json on their homepage.
# This small script creates an index.html file to read all articles from the current edition.

import requests
from bs4 import BeautifulSoup
import json

def weekendavisen():
	# Request front page
	data = requests.get("https://weekendavisen.dk")
	result = data.text

	# Soup site and create a list of links and their titles
	soup = BeautifulSoup(result, "html.parser")

	for a in soup.find_all('a'):
		if "/oversigt" in a['href']:
			overviewurl = a['href']

	edition = overviewurl[overviewurl.find(".dk/") + 4:overviewurl.find(".dk/") + 11]
	request = "https://weekendavisen.dk/" + edition + "/oversigt"

	# Request site and soup it
	data = requests.get(request)
	result = data.text
		
	soup = BeautifulSoup(result, "html.parser")
	content = soup.find('script', attrs={'class':'js-react-on-rails-component', 'data-component-name':"IndexPage"})
	jsonobject = content.string
		
	# Create json object
	jsondecode = json.loads(jsonobject)
	
	# Iterate through articles and articles to dictionary
	articlelist = []
	
	for section in jsondecode["sections"]:
		for item in section["items"]:
			summary = item["summary"]
			summary_output = '<b>' + summary[:summary.find(".") + 1] + '</b> ' + summary[summary.find(".") + 1:] + ''
			title = item["title"]
			title_output = '<h1><big>' + title + '</big></h1>'
			if item["type"] == "newsarticleplus":
				article = item["body"] + item["paidBody"]
			else:
				article = item["body"]
			output = summary_output + title_output + article

			articlelist.append(output)

	week_linkstr = ""
	for article in articlelist:
		week_linkstr += article
			
	return week_linkstr	

def htmlgenerator():
	htmlstart = '''<!DOCTYPE HTML>
	<head>
	<meta charset="utf-8"/>

	<title>Weekendavisen</title>

	</head>
	<body>'''
	
	htmlend = '</body></html>'
	
	finalhtml = htmlstart + week_links + htmlend

	# Saves to disc
	with open("./index.html", "wt", encoding="utf8") as fout:
		fout.write(finalhtml)	
			
week_links = weekendavisen()
htmlgenerator()

A short link tool with Django/Python

By now there probably aren't many people left without their own short link service. One of the best known is https://bitly.com/.

As an exercise I have built the short link service https://wallnot.dk/e/link. The links admittedly don't come out particularly short, but for now I'm saving the cost of a separate domain name. It's not as if there's a shortage of options elsewhere.

Building a short link tool in Django is surprisingly easy.

Here is a small recipe.

Recipe for a short link tool

After creating my project (see the guide at https://www.djangoproject.com/start/ if needed), I get started.

I start with my data model in models.py. Each link has a destination (the long link), a short link and a timestamp. The destination is a URL, the short link is a string of characters, and the timestamp is – a timestamp:

from django.db import models
from django.utils import timezone

class Link(models.Model):
    destination = models.URLField(max_length=500)
    shortlink = models.CharField(max_length=6, unique=True)
    date = models.DateTimeField(default=timezone.now, editable=False)
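
Before moving on, the model can be tried out from the Django shell (python manage.py shell). Here is a minimal sketch, assuming the app is called links (as the template paths later suggest) and that makemigrations and migrate have been run:

from links.models import Link

# Create a link with a hand-picked short code and read it back
link = Link.objects.create(destination="https://www.djangoproject.com/", shortlink="abc123")
print(link.shortlink, link.destination, link.date)

# shortlink is unique, so reusing "abc123" for another destination would raise an IntegrityError
print(Link.objects.get(shortlink="abc123").destination)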

I know I will need a form, which I create in forms.py. Here I use a type of form called a ModelForm, where Django makes sure the validation rules match the type of data in my underlying data model:

from django.forms import ModelForm, URLInput
from .models import Link

class LinkForm(ModelForm):
    class Meta:
        model = Link
        fields = ['destination']
        widgets = {
            'destination': URLInput(attrs={'placeholder': 'Indsæt link'}),
        }

The logic behind each view in Django goes in views.py. I have two different views: one that renders my front page, showing both the form for entering links and the resulting short link (index), and one that is triggered when a user visits a short link (redirect).

Finally, I have a function that generates the short links themselves.

I have commented the code heavily, so I hope it is easy to follow:

from django.shortcuts import render
from django.http import HttpRequest, HttpResponseRedirect
from .models import Link
from .forms import LinkForm
import hashlib
import bcrypt

# Function to create a random hash to use as short link address
def create_shortlink(destination):
	salt = bcrypt.gensalt().decode()	# Random salt
	destination = destination+salt		# Salt added to destination URL
	hash = hashlib.md5(destination.encode()).hexdigest() # Hashed to alphanumeric string
	return hash[:6]	# First 6 characters of that string 

# Front page with a form to enter destination address. Short URL returned.
def index(request):
	form = LinkForm()	# Loads form
	url = 'https://wallnot.dk/e/link/'	# site url
	# If a destination is submitted, a short link is returned
	if request.method == 'POST':
		form = LinkForm(request.POST) # Form instance with submitted data
		# Check whether submitted data is valid
		if form.is_valid():
			destination = form.cleaned_data['destination'] # Submitted destination
			# If destination is already in database, return short link for destination from database
			try:
				link = Link.objects.get(destination=destination)
				sharelink = url + link.shortlink # Creates full URL using page URL and hash
			# If destination is not in database, create a new short link
			except Link.DoesNotExist:
				# Loop to create a unique hash value for short link
				unique_link = False
				while unique_link == False:
					hash = create_shortlink(destination)	# Return hash
					# First we check whether the hash is a duplicate
					try:
						Link.objects.get(shortlink=hash)	# Check whether hash is used
					# If not a duplicate, an error is thrown, and we can save the hash
					except Link.DoesNotExist:
						link = form.save(commit=False)	# Prepare to save form destination data and hash
						link.shortlink = hash	# Sets short link to hash value
						link.save()	# Saves destination and short link to database
						sharelink = url + link.shortlink # Creates full URL using page URL and hash
						unique_link = True	# If check causes error, hash is unused, exit loop
			context = {'sharelink': sharelink, 'form': form}	# Dictionary with variables used in template
			return render(request, 'links/index.html', context)
		# If form is invalid, just renders page.
		else:
			context = {'form': form}
			return render(request, 'links/index.html', context)
	# Render page with form before user has submitted
	context = {'form': form}
	return render(request, 'links/index.html', context)

# Short link redirect to destination URL
def redirect(request, shortlink):
	# Query the database for short link, if there is a hit, redirect to destination URL
	try:
		link = Link.objects.get(shortlink=shortlink)
		return HttpResponseRedirect(link.destination)
	# An error means the short link doesn't exist, so the front page template is shown with an error variable
	except Link.DoesNotExist:
		error = True
		context = {'error': error}
		return render(request, 'links/index.html', context)
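
A small aside on create_shortlink: because of the random salt, the same destination produces a new candidate code on every call. That is exactly why the index view first looks the destination up in the database, and why the while loop keeps generating codes until it finds one that is unused. A standalone sketch of the same idea (assuming bcrypt is installed):

import hashlib
import bcrypt

def create_shortlink(destination):
	salt = bcrypt.gensalt().decode()	# Random salt
	return hashlib.md5((destination + salt).encode()).hexdigest()[:6]

# Prints three different 6-character codes for the same destination
for _ in range(3):
	print(create_shortlink("https://www.djangoproject.com/"))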

To serve the pages, I have urls.py, which tells Django how a URL entered by the user maps to the functions in views.py:

from django.urls import path
from . import views

urlpatterns = [
    path('', views.index, name='index'),
    path('<shortlink>', views.redirect, name='redirect'),
]
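
The urls.py above only covers the app itself. As a minimal sketch, assuming the app is called links and is mounted under e/link/ as on wallnot.dk, the project-level urls.py could include it like this:

from django.contrib import admin
from django.urls import include, path

urlpatterns = [
    path('admin/', admin.site.urls),
    path('e/link/', include('links.urls')),  # mount point assumed from the wallnot.dk URL
]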

And finally I have index.html, the template my pages are generated from. If you haven't tried Django before, notice everything inside curly braces ({}). They are used partly for simple logic (e.g. if statements) and partly to insert variables from views.py into the generated page.

If you look at that logic, I use the if statements to get by with a single template no matter which situation the user is in, so that the content differs, for example, when the user has made a mistake filling in the form compared with when the user hasn't submitted the form yet.

There is also a small piece of JavaScript in the file that lets the user copy the short link to the clipboard.

<!doctype html>
<html lang="da">
  <head>
    <!-- Required meta tags -->
	<title>Korte links</title>
	<meta name="description" content="Skønne korte links">
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
	<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png">
	<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png">
	<link rel="icon" type="image/png" sizes="16x16" href="/favicon-16x16.png">
	<link rel="manifest" href="/site.webmanifest">
	<link rel="mask-icon" href="/safari-pinned-tab.svg" color="#5bbad5">
	<meta name="msapplication-TileColor" content="#ffc40d">
	<meta name="theme-color" content="#ffffff">

	<style>
	body { 
		font-family: -apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";
		text-align: center;
		box-sizing: border-box;	
	}

	h1 {
		margin-top: 0;
		font-size: 4.0rem;
		font-weight: 300;
		line-height: 1.2;
		margin-bottom: 1.5rem;
	}

	h2 {
		margin-top: 1.5rem;
		font-size: 2.5rem;
		font-weight: 300;
		line-height: 1.2;
		margin-bottom: 1.5rem;
	}

	input {
		width: 60%;
		line-height: 1.2;
		font-size: 1.0rem;
		height: 1.5rem;
		padding: 10px;
	}

	button {
		width: 50%;
		border: 1px solid transparent;
		padding: .375rem .75rem;
		font-size: 1rem;
		line-height: 1.8;
		height: 2.5rem;
		border-radius: .25rem;
		color: #fff;
		background-color: #28a745;
		border-color: #28a745;
	}

	button:focus {
		box-shadow: 0 0 0 0.2rem rgba(72,180,97,.5)
	}

	button:hover {
		background-color: #218838;
		border-color: #1e7e34;
	}
	
	.footer {
		position: fixed;
		left: 0;
		bottom: 0;
		width: 100%;
		background-color: #f1f1f1;
		color: black;
	}	
	</style>
  </head>
  <body>

<h1>Lav et kort link</h1>

{% if form %}
	<form method="post">
	{% csrf_token %}
	<p>{{ form.destination }}</p>
	<p><button type="submit" value="Lav et kort link">Lav et kort link</button></p>
	</form>

	{% if form.destination.errors %}
		<h2>Tast et gyldigt link!</h2>
		<p><em>Du har tastet et ugyldigt link. Prøv igen med et gyldigt link med http://, https://, ftp:// eller ftps:// foran.</em></p>
	{% endif %}

	{% if request.method == "POST" and not form.destination.errors %}
		<h2>Her er dit link:</h2>
		<p><a href="{{ sharelink }}">{{ sharelink }}</a></p>
		<button class="copy">Kopier link</button>
	{% endif %} 

{% endif %}

{% if error %}
<h2>Har du tastet forkert?</h2>
<p><em>Du har prøvet at bruge et kort link. Desværre er det link, du har tastet, ikke registreret. Måske er du kommet til at taste forkert?</em></p>
<p><a href="{% url 'index' %}">Til forsiden</a>
{% endif %} 

<div class="footer">
  <p>Lav relativt korte links på wallnot.dk. Gratis og fri for annoncer og overvågning.</p>
</div>


<script>
function fallbackCopyTextToClipboard(text) {
  var textArea = document.createElement("textarea");
  textArea.value = text;
  document.body.appendChild(textArea);
  textArea.focus();
  textArea.select();

  try {
    var successful = document.execCommand("copy");
    var msg = successful ? "successful" : "unsuccessful";
    console.log("Fallback: Kopiering gik fint " + msg);
  } catch (err) {
    console.error("Fallback: Kunne ikke kopiere", err);
  }
  document.body.removeChild(textArea);
}

function copyTextToClipboard(text) {
  if (!navigator.clipboard) {
    fallbackCopyTextToClipboard(text);
    return;
  }
  navigator.clipboard.writeText(text).then(function() {
    console.log('Kopiering gik fint');
  }, function(err) {
    console.error('Kunne ikke kopiere', err);
  });
}

var copy = document.querySelector('.copy');

copy.addEventListener('click', function(event) {
  copyTextToClipboard('{{ sharelink }}');
});
</script>

</body>
</html>

Package tracking across multiple carriers

At the moment I'm practising Django – a tool for building web applications in Python. It is really clever.

It took a couple of hours to get https://wallnot.dk/e/pak/ online, but then again no effort has gone into the user interface, and the back end could certainly also be done more cleverly. The page can be used to track parcels from several different carriers (PostNord, GLS, DAO).

If you have parcels on the way from other carriers and want to share the tracking numbers with me, I'm interested.


Pull your transactions out of the new Nordnet

Here is an update of my old program for extracting transactions from Nordnet. It has been updated to work with Nordnet's new design and API:

# -*- coding: utf-8 -*-
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com
""" This program logs into a Nordnet account and extracts transactions as a csv file.
Handy for exporting to Excel with as few manual steps as possible """

import requests 
from datetime import datetime
from datetime import date

# USER ACCOUNT, PORTFOLIO AND PERIOD DATA. SHOULD BE EDITED FOR YOUR NEEDS #

# Nordnet user account credentials and accounts/portfolios names (choose yourself) and numbers.
# To get account numbers go to https://www.nordnet.dk/transaktioner and change
# between accounts. The number after "accid=" in the new URL is your account number.
# If you have only one account, your account number is 1.
user = ''
password = ''
accounts = {
	"Frie midler: Nordnet": "1",
	"Ratepension": "3",
}

# Start date (start of period for transactions) and date today used for extraction of transactions
startdate = '2013-01-01'
today = date.today()
enddate = datetime.strftime(today, '%Y-%m-%d')

# Manual data lines. These can be used if you have portfolios elsewhere that you would
# like to add manually to the data set. If no manual data the variable manualdataexists
# should be set to False
manualdataexists = True
manualdata = """
Id;Bogføringsdag;Handelsdag;Valørdag;Transaktionstype;Værdipapirer;Instrumenttyp;ISIN;Antal;Kurs;Rente;Afgifter;Beløb;Valuta;Indkøbsværdi;Resultat;Totalt antal;Saldo;Vekslingskurs;Transaktionstekst;Makuleringsdato;Verifikations-/Notanummer;Depot
;30-09-2013;30-09-2013;30-09-2013;KØBT;Obligationer 3,5%;Obligationer;;72000;;;;-69.891,54;DKK;;;;;;;;;;Frie midler: Finansbanken
"""

# CREATE VARIABLES FOR LATER USE. #

# Creates a dictionary to use with cookies	
cookies = {}

# A variable to store transactions before saving to csv
transactions = ""

# LOGIN TO NORDNET #

# First part of cookie setting prior to login
url = 'https://classic.nordnet.dk/mux/login/start.html?cmpi=start-loggain&state=signin'
request = requests.get(url)
cookies['LOL'] = request.cookies['LOL']
cookies['TUX-COOKIE'] = request.cookies['TUX-COOKIE']

# Second part of cookie setting prior to login
url = 'https://classic.nordnet.dk/api/2/login/anonymous'
request = requests.post(url)
cookies['NOW'] = request.cookies['NOW']

# Actual login that gets us cookies required for later use
url = 'https://classic.nordnet.dk/api/2/authentication/basic/login'
request = requests.post(url,cookies=cookies, data = {'username': user, 'password': password})
cookies['NOW'] = request.cookies['NOW']
cookies['xsrf'] = request.cookies['xsrf']

# Getting a NEXT cookie
url = 'https://classic.nordnet.dk/oauth2/authorize?client_id=NEXT&response_type=code&redirect_uri=https://www.nordnet.dk/oauth2/'
request = requests.get(url, cookies=cookies)
cookies['NEXT'] = request.history[1].cookies['NEXT']

# GET TRANSACTION DATA #

# Payload and url for transaction requests
payload = {
'locale': 'da-DK',
'from': startdate,
'to': enddate,
}

url = "https://www.nordnet.dk/mediaapi/transaction/csv/filtered"

firstaccount = True
for portfolioname, id in accounts.items():
	payload['account_id'] = id
	data = requests.get(url, params=payload, cookies=cookies)
	result = data.content.decode('utf-16')
	result = result.replace('\t',';')

	result = result.splitlines()
	
	firstline = True
	for line in result:
		# For first account and first line, we use headers and add an additional column
		if line and firstline == True and firstaccount == True:
			transactions += line + ';' + "Depot" + "\n"
			firstaccount = False
			firstline = False
		# First lines of additional accounts are discarded
		elif line and firstline == True and firstaccount == False:
			firstline = False
		# Content lines are added
		elif line and firstline == False:
			# Fix because Nordnet sometimes adds one empty column too many
			if line.count(';') == 23:
				line = line.replace('; ',' ')
			transactions += line + ';' + portfolioname + "\n"

# ADD MANUAL LINES IF ANY #
if manualdataexists == True:
	manualdata = manualdata.split("\n",2)[2]
	transactions += manualdata

# Saves CSV
with open("transactions.csv", "w", encoding='utf8') as fout:
	fout.write(transactions)


Get prices – historical and real time – for your securities from the new Nordnet

Nordnet has a new design and a new API. That means a few more hoops to jump through than before when you want to get hold of the prices of your securities.

Here is a Python program that can help you. It requires a Nordnet login.

# -*- coding: utf-8 -*-
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com
""" This program extracts historical stock prices from Nordnet (and Morningstar as a fallback) """

import requests
from datetime import datetime
from datetime import date
import os

# Nordnet user account credentials
user = ''
password = ''

# DATE AND STOCK DATA. SHOULD BE EDITED FOR YOUR NEEDS #

# Start date (start of historical price period)
startdate = '2013-01-01'

# List of shares to look up prices for.
# Format is: Name, Morningstar id, Nordnet stock identifier
# See e.g. https://www.nordnet.dk/markedet/aktiekurser/16256554-novo-nordisk-b
# (identifier is 16256554)
# All shares must have a name (whatever you like). To get prices they must
# either have a Nordnet identifier or a Morningstar id
sharelist = [
["Maj Invest Pension","F0GBR064UH",16099877],
["Novo Nordisk B A/S","0P0000A5BQ",16256554],
["Nordnet Superfonden Danmark","F00000TH8X",""],
]

# CREATE VARIABLES FOR LATER USE. #

# A variable to store historical prices before saving to csv	
finalresult = ""
finalresult += '"date";"price";"instrument"' + '\n'

# A cookie dictionary for storing cookies
cookies = {}

# NORDNET LOGIN #

# First part of cookie setting prior to login
url = 'https://classic.nordnet.dk/mux/login/start.html?cmpi=start-loggain&state=signin'
request = requests.get(url)
cookies['LOL'] = request.cookies['LOL']
cookies['TUX-COOKIE'] = request.cookies['TUX-COOKIE']

# Second part of cookie setting prior to login
url = 'https://classic.nordnet.dk/api/2/login/anonymous'
request = requests.post(url, cookies=cookies)
cookies['NOW'] = request.cookies['NOW']

# Actual login that gets us cookies required for later use
url = "https://classic.nordnet.dk/api/2/authentication/basic/login"
request = requests.post(url,cookies=cookies, data = {'username': user, 'password': password})
cookies['NOW'] = request.cookies['NOW']
cookies['xsrf'] = request.cookies['xsrf']

# Getting a NEXT cookie
url = "https://classic.nordnet.dk/oauth2/authorize?client_id=NEXT&amp;response_type=code&amp;redirect_uri=https://www.nordnet.dk/oauth2/"
request = requests.get(url, cookies=cookies)
cookies['NEXT'] = request.history[1].cookies['NEXT']

# LOOPS TO REQUEST HISTORICAL PRICES AT NORDNET AND MORNINGSTAR #

# Nordnet loop to get historical prices
for share in sharelist:
	# Only look up prices at Nordnet if a Nordnet stock identifier exists
	if share[2]:
		url = "https://www.nordnet.dk/api/2/instruments/historical/prices/" + str(share[2])
		payload = {"from": startdate, "fields": "last"}
		data = requests.get(url, params=payload, cookies=cookies)
		jsondecode = data.json()
		
		# Sometimes the final date is returned twice. A list is created to check for duplicates.
		datelist = []
		
		for value in jsondecode[0]['prices']:
			price = str(value['last'])
			price = price.replace(".",",")
			date = datetime.fromtimestamp(value['time'] / 1000)
			date = datetime.strftime(date, '%Y-%m-%d')
			# Only adds a date if it has not been added before
			if date not in datelist:
				datelist.append(date)
				finalresult += '"' + date + '"' + ";" + '"' + price + '"' + ";" + '"' + share[0] + '"' + "\n"

# Morningstar loop to get historical prices			
for share in sharelist:
	# Only runs for one specific fund in this instance
	if share[0] == "Nordnet Superfonden Danmark":
		payload = {"id": share[1], "currencyId": "DKK", "idtype": "Morningstar", "frequency": "daily", "startDate": startdate, "outputType": "COMPACTJSON"}
		data = requests.get("http://tools.morningstar.dk/api/rest.svc/timeseries_price/nen6ere626", params=payload)
		jsondecode = data.json()
		
		for lists in jsondecode:
			price = str(lists[1])
			price = price.replace(".",",")
			date = datetime.fromtimestamp(lists[0] / 1000)
			date = datetime.strftime(date, '%Y-%m-%d')
			finalresult += '"' + date + '"' + ";" + '"' + price + '"' + ";" + '"' + share[0] + '"' + "\n"

# WRITE CSV OUTPUT TO FILE #			

with open("kurser.csv", "w", newline='', encoding='utf8') as fout:
	fout.write(finalresult)


Things you don't want to know about the card game War

If you happen to have a child around the age of five, you can play the card game War (Krig). The cards are shuffled and dealt equally between two players, both players turn over a card from their pile at the same time, the highest card wins, and if the cards are equally high there is war. It is so simple that you might as well have a computer play it.

So I wrote a small Python program that can simulate the game.

I discovered a hole in the rules: nowhere is it described what happens when a player doesn't have enough cards to take part in a war (or a double war, triple war, and so on). I decided that if a player at some point lacks the cards to take part, the player without enough cards loses. In the very rare situation where neither player has enough cards to take part (a many-times-double war early in the game), the player with the most cards wins. If both players have the same number of cards, it's a draw.

I had the computer play 1 million games of War, and here is what I can tell you about War that you don't want to know:

  • The average number of duels in a game of War is 177
  • The game with the most duels had 1,825 duels
  • The game with the fewest had 4 duels
  • The player with the highest sum of cards after the shuffle won 573,405 times
  • The player with the lowest sum of cards won 397,602 times
  • Across all the games there were:
    • Single wars: 12,366,762
    • Double wars: 888,024
    • Triple wars: 60,727
    • Quadruple wars: 3,852
    • Quintuple wars: 206
    • Sextuple wars: 10

Here is the program:

import random

# Counters for wars of increasing depth (single war, double war, ...)
krig1 = 0
krig2 = 0
krig3 = 0
krig4 = 0
krig5 = 0
krig6 = 0
krig7 = 0

# Number of duels per game, and totals for ordinary duels and wars
number_of_plays_list = []
not_war = 0
war = 0

# How often the player dealt the highest card sum won or lost, and number of draws
highest_deck_won = 0
highest_deck_lost = 0
equal_games = 0

i = 0
number_of_games = 1000000

while i < number_of_games:
	number_of_plays_counter = 0
	deck = [2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,6,6,6,6,7,7,7,7,8,8,8,8,9,9,9,9,10,10,10,10,11,11,11,11,12,12,12,12,13,13,13,13,14,14,14,14]
	random.shuffle(deck)

	player_a_deck = deck[0:26]
	player_b_deck = deck[26:52]

	if sum(player_a_deck) > sum(player_b_deck):
		highest_deck = "a"
	elif sum(player_a_deck) < sum(player_b_deck):
		highest_deck = "b"
	else:
		highest_deck = "equal"

	while len(player_a_deck) > 0 and len(player_b_deck) > 0:
		number_of_plays_counter += 1
		if player_a_deck[0] > player_b_deck[0]:
			not_war += 1
			player_a_deck.append(player_a_deck[0])
			player_a_deck.append(player_b_deck[0])
			del player_a_deck[0]
			del player_b_deck[0]
		elif player_a_deck[0] < player_b_deck[0]:
			not_war += 1
			player_b_deck.append(player_b_deck[0])
			player_b_deck.append(player_a_deck[0])
			del player_a_deck[0]
			del player_b_deck[0]
		elif player_a_deck[0] == player_b_deck[0]:
			war += 1
			krig1 += 1
			if len(player_a_deck) >= 5 and len(player_b_deck) >= 5:
				if player_a_deck[4] > player_b_deck[4]:
					player_a_deck.extend(player_a_deck[0:5])
					player_a_deck.extend(player_b_deck[0:5])
					del player_a_deck[0:5]
					del player_b_deck[0:5]
				elif player_a_deck[4] < player_b_deck[4]:
					player_b_deck.extend(player_b_deck[0:5])
					player_b_deck.extend(player_a_deck[0:5])
					del player_a_deck[0:5]
					del player_b_deck[0:5]
				elif player_a_deck[4] == player_b_deck[4]:
					krig2 += 1
					if len(player_a_deck) >= 9 and len(player_b_deck) >= 9:			
						if player_a_deck[8] > player_b_deck[8]:
							player_a_deck.extend(player_a_deck[0:9])
							player_a_deck.extend(player_b_deck[0:9])
							del player_a_deck[0:9]
							del player_b_deck[0:9]
						elif player_a_deck[8] < player_b_deck[8]:
							player_b_deck.extend(player_b_deck[0:9])
							player_b_deck.extend(player_a_deck[0:9])
							del player_a_deck[0:9]
							del player_b_deck[0:9]	
						elif player_a_deck[8] == player_b_deck[8]:
							krig3 += 1
							if len(player_a_deck) >= 13 and len(player_b_deck) >= 13:
								if player_a_deck[12] > player_b_deck[12]:
									player_a_deck.extend(player_a_deck[0:13])
									player_a_deck.extend(player_b_deck[0:13])
									del player_a_deck[0:13]
									del player_b_deck[0:13]
								elif player_a_deck[12] < player_b_deck[12]:
									player_b_deck.extend(player_b_deck[0:13])
									player_b_deck.extend(player_a_deck[0:13])
									del player_a_deck[0:13]
									del player_b_deck[0:13]	
								elif player_a_deck[12] == player_b_deck[12]:
									krig4 += 1
									if len(player_a_deck) >= 17 and len(player_b_deck) >= 17:
										if player_a_deck[16] > player_b_deck[16]:
											player_a_deck.extend(player_a_deck[0:17])
											player_a_deck.extend(player_b_deck[0:17])
											del player_a_deck[0:17]
											del player_b_deck[0:17]
										elif player_a_deck[16] < player_b_deck[16]:
											player_b_deck.extend(player_b_deck[0:17])
											player_b_deck.extend(player_a_deck[0:17])
											del player_a_deck[0:17]
											del player_b_deck[0:17]
										elif player_a_deck[16] == player_b_deck[16]:
											krig5 += 1
											if len(player_a_deck) >= 21 and len(player_b_deck) >= 21:
												if player_a_deck[20] > player_b_deck[20]:
													player_a_deck.extend(player_a_deck[0:21])
													player_a_deck.extend(player_b_deck[0:21])
													del player_a_deck[0:21]
													del player_b_deck[0:21]
												elif player_a_deck[20] < player_b_deck[20]:
													player_b_deck.extend(player_b_deck[0:21])
													player_b_deck.extend(player_a_deck[0:21])
													del player_a_deck[0:21]
													del player_b_deck[0:21]										
												elif player_a_deck[20] == player_b_deck[20]:
													krig6 += 1
													if len(player_a_deck) >= 25 and len(player_b_deck) >= 25:
														if player_a_deck[24] > player_b_deck[24]:
															player_a_deck.extend(player_a_deck[0:25])
															player_a_deck.extend(player_b_deck[0:25])
															del player_a_deck[0:25]
															del player_b_deck[0:25]
														elif player_a_deck[24] < player_b_deck[24]:
															player_b_deck.extend(player_b_deck[0:25])
															player_b_deck.extend(player_a_deck[0:25])
															del player_a_deck[0:25]
															del player_b_deck[0:25]
														elif player_a_deck[24] == player_b_deck[24]:
															krig7 += 1
															break
													else:
														break
											else:
												break
									else:
										break
							else:
								break
					else:
						break
			else:
				break
	if len(player_a_deck) > len(player_b_deck):
		if highest_deck == "a":
			highest_deck_won += 1
		elif highest_deck == "b"	:
			highest_deck_lost += 1
	elif len(player_a_deck) < len(player_b_deck):
		if highest_deck == "a":
			highest_deck_lost += 1
		elif highest_deck == "b"	:
			highest_deck_won += 1
	else:
		equal_games += 1
	number_of_plays_list.append(number_of_plays_counter)
	i += 1
	print(i/number_of_games)

print("Der blev spillet {} spil".format(number_of_games))
print("Det gennemsnitlige antal dueller var {}".format(sum(number_of_plays_list)/len(number_of_plays_list)))
print("Det højeste antal dueller var {}".format(max(number_of_plays_list)))
print("Det laveste antal dueller var {}".format(min(number_of_plays_list)))
print("Den spiller med højest sum af kort vandt {} gange".format(highest_deck_won))
print("Den spiller med højest sum af kort tabte {} gange".format(highest_deck_lost))
print(krig1, krig2, krig3, krig4, krig5, krig6, krig7)
print(not_war, war)
print("Uafgjorte spil: {}".format(equal_games))

Get currency data from Nordnet with Python

Nordnet has a new website with a market overview where, among other things, you can find exchange rates. I wanted to get hold of those for an Excel sheet 🙂

I pressed F12 in my browser to examine what happens when I click “Valutaer” (currencies) on the page, and which cookies and client identification were needed to get data back. (You can read more about the method in many of my other programming posts.)

It ended up as this program, which generates a CSV file with the latest exchange rate for a number of common currencies from Nordnet:

# -*- coding: utf-8 -*-
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com
""" This program gets currency data from Nordnet.
Handy for exporting to Excel with as few manual steps as possible """
import requests 

# Creates a dictionary to use for cookies	
cookies = {}

# Sets NEXT cookie
url = 'https://www.nordnet.dk/markedet'
r = requests.get(url)
cookies['NEXT'] = r.cookies['NEXT']

# The API requires a client-id header
headers = {'client-id': 'NEXT'}

# Gets currency data
url = 'https://www.nordnet.dk/api/2/instrument_search/query/indicator?entity_type=CURRENCY&apply_filters=market_overview_group%3DDK_GLOBAL_MO'
r = requests.get(url, cookies=cookies, headers=headers)
currencies = r.json()

# Generate CSV output of last value by looping through currencies
output = "navn;senest\n"
for currency in currencies['results']:
	name = currency['instrument_info']['name']
	price = str(currency['price_info']['last']['price'])	
	price = price.replace(".",",")
	output += name + ";" + price + "\n"

# Write CSV output to file #
with open("currency.csv", "w", encoding='utf8') as fout:
	fout.write(output)

Wallnot's Twitter bot: version 2

It's only a few days since Wallnot.dk's Twitter bot went live. You can find the bot here and my post about it here.

The bot worked fine as such, but because of a limit of 250 requests per month in Twitter's API I could only update 4 times a day, and that is pretty rarely (the old program made 2 requests every time it ran, i.e. 30 days * 4 updates * 2 requests = 240 requests).

Luckily I found TWINT, a Python module that makes it easy to fetch data from Twitter without using Twitter's API and its tiresome limits.

By reusing most of my old code, I have now made a version of the bot that can run as often as I like. For now I have set it to run 4 times an hour.

For fun I have also added a list of friendly adjectives about the subscribers of Politiken and Zetland, which the program picks from at random every time it posts a link on Twitter.

The finished code

Here is the finished code, if you're interested.

# -*- coding: utf-8 -*-
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com

import requests
from bs4 import BeautifulSoup
from datetime import datetime
from datetime import date
from datetime import timedelta
import json
import time
import random
import twint	# https://github.com/twintproject/twint
from TwitterAPI import TwitterAPI

# CONFIGURATION #
# List to store articles to post to Twitter
articlestopost = []

# Yesterday's date variable
yesterday = date.today() - timedelta(days=1)
since = yesterday.strftime("%Y-%m-%d")

# Twint configuration
c = twint.Config()
c.Hide_output = True
c.Store_object = True
c.Since = since

# API LOGIN
client_key = ''
client_secret = ''
access_token = ''
access_secret = ''
api = TwitterAPI(client_key, client_secret, access_token, access_secret)


# POLITIKEN #
# Run search
searchterm = "politiken.dk/del"
c.Search = searchterm
twint.run.Search(c)
tweets = twint.output.tweets_object

# Add urls in tweets to list and remove any duplicates from list
urllist = []
for tweet in tweets:
	for url in tweet.urls:
		if searchterm in url:
			urllist.append(url)

urllist = list(set(urllist))

# Only proces urls that were not in our last Twitter query
proceslist = []
with open("./pol_lastbatch.json", "r", encoding="utf8") as fin:
	lastbatch = list(json.load(fin))
	for url in urllist:
		if url not in lastbatch:
			proceslist.append(url)
# Save current query to use for next time
with open("./pol_lastbatch.json", "wt", encoding="utf8") as fout:
	lastbatch = json.dumps(urllist)
	fout.write(lastbatch)

# Request articles and get titles and dates and sort by dates
articlelist = []
titlecheck = []

for url in proceslist:
	try:
		data = requests.get(url)
		result = data.text
		if '"isAccessibleForFree": "True"' not in result:
			soup = BeautifulSoup(result, "lxml")
			# Finds titles and timestamps
			title = soup.find('meta', attrs={'property':'og:title'})
			title = title['content']
			timestamp = soup.find('meta', attrs={'property':'article:published_time'})
			timestamp = timestamp['content']
			dateofarticle = datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S%z')
			realurl = data.history[0].headers['Location']
			if title not in titlecheck:
				articlelist.append({"title": title, "url": realurl, "date": dateofarticle})
				titlecheck.append(title)			
	except Exception as e:
		print(url)
		print(e)
			
articlelist_sorted = sorted(articlelist, key=lambda k: k['date']) 

# Check if article is already posted and update list of posted articles
with open("./pol_published.json", "r", encoding="utf8") as fin:
	alreadypublished = list(json.load(fin))
	# File below used for paywall.py to update wallnot.dk
	with open("./pol_full_share_links.json", "r", encoding="utf8") as finalready:	
		alreadypublishedalready = list(json.load(finalready))
		for art in articlelist_sorted:
			url = art['url']
			token = url.index("?shareToken")
			url = url[:token]
			if url not in alreadypublished:
				alreadypublished.append(url)
				articlestopost.append(art)
				alreadypublishedalready.append(art['url'])
		# Save updated already published links
		with open("./pol_published.json", "wt", encoding="utf8") as fout:
			alreadypublishedjson = json.dumps(alreadypublished)
			fout.write(alreadypublishedjson)
		with open("./pol_full_share_links.json", "wt", encoding="utf8") as fout:
			alreadypublishedjson = json.dumps(alreadypublishedalready)
			fout.write(alreadypublishedjson)


# ZETLAND #
# Run search
searchterm = "zetland.dk/historie"
c.Search = searchterm
twint.run.Search(c)
tweets = twint.output.tweets_object

# Add urls in tweets to list and remove any duplicates from list
urllist = []
for tweet in tweets:
	for url in tweet.urls:
		if searchterm in url:
			urllist.append(url)

urllist = list(set(urllist))

# Only proces urls that were not in our last Twitter query
proceslist = []
with open("./zet_lastbatch.json", "r", encoding="utf8") as fin:
	lastbatch = list(json.load(fin))
	for url in urllist:
		if url not in lastbatch:
			proceslist.append(url)
# Save current query to use for next time
with open("./zet_lastbatch.json", "wt", encoding="utf8") as fout:
	lastbatch = json.dumps(urllist)
	fout.write(lastbatch)

# Request articles and get titles and dates and sort by dates
articlelist = []
titlecheck = []

for url in proceslist:
	try:
		data = requests.get(url)
		result = data.text
		soup = BeautifulSoup(result, "lxml")
		title = soup.find('meta', attrs={'property':'og:title'})
		title = title['content']
		timestamp = soup.find('meta', attrs={'property':'article:published_time'})
		timestamp = timestamp['content']
		timestamp = timestamp[:timestamp.find("+")]
		dateofarticle = datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S.%f')
		if title not in titlecheck:
			articlelist.append({"title": title, "url": url, "date": dateofarticle})
			titlecheck.append(title)
	except Exception as e:
		print(url)
		print(e)
			
articlelist_sorted = sorted(articlelist, key=lambda k: k['date']) 

# Check if article is already posted and update list of posted articles
with open("./zet_published.json", "r", encoding="utf8") as fin:
	alreadypublished = list(json.load(fin))
	for art in articlelist_sorted:
		title = art['title']
		if title not in alreadypublished:
			alreadypublished.append(title)
			articlestopost.append(art)
	# Save updated already published links
	with open("./zet_published.json", "wt", encoding="utf8") as fout:
		alreadypublishedjson = json.dumps(alreadypublished, ensure_ascii=False)
		fout.write(alreadypublishedjson)


# POST TO TWITTER #
friendlyterms = ["flink","rar","gavmild","velinformeret","intelligent","sød","afholdt","bedårende","betagende","folkekær","godhjertet","henrivende","smagfuld","tækkelig","hjertensgod","graciøs","galant","tiltalende","prægtig","kær","godartet","human","indtagende","fortryllende","nydelig","venlig","udsøgt","klog","kompetent","dygtig","ejegod","afholdt","omsorgsfuld","elskværdig","prægtig","skattet","feteret"]
enjoyterms = ["God fornøjelse!", "Nyd den!", "Enjoy!", "God læsning!", "Interessant!", "Spændende!", "Vidunderligt!", "Fantastisk!", "Velsignet!", "Glæd dig!", "Læs den!", "Godt arbejde!", "Wauv!"]

if articlestopost:
	for art in articlestopost:
		if "zetland" in art['url']:
			medium = "Zetland"
		else:
			medium = "Politiken"
		friendlyterm = random.choice(friendlyterms)
		enjoyterm = random.choice(enjoyterms)
		status = "En " + friendlyterm + " abonnent på " + medium + " har delt en artikel. " + enjoyterm + " " + art['url']
		r = api.request('statuses/update', {'status': status})
		time.sleep(15)

Wallnot's new Twitter bot

Update: I have made a new, improved version of the bot. Read about it here.

Both Zetland and Politiken have a feature where subscribers can share paywalled articles with friends, acquaintances and the public. The article gets a unique URL that unlocks the paywall and lets anyone read the article.

I figured that Wallnot – my website with articles that aren't behind a paywall – needed a bigger presence on social media.

So I have built a bot that searches Twitter for shared articles from Zetland and Politiken – and posts the links as tweets. The bot updates at roughly 8.25, 12.25, 16.25 and 20.25. It would be great to be able to update more often, but then Twitter wants money.

You can find Wallnot's new Twitter bot here: https://twitter.com/wallnot_dk

This is what it looks like when Wallnot's bot tweets.

I used Python and the TwitterAPI module. If you want to run the program yourself, you need a developer account and an app with Twitter. See https://developer.twitter.com/.

Here is the finished program.

# -*- coding: utf-8 -*-
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com
# THIS PROGRAM POSTS NEW SHARED ARTICLES FROM ZETLAND.DK AND POLITIKEN.DK TO TWITTER

import requests
from bs4 import BeautifulSoup
from datetime import datetime
import json
import time
from nested_lookup import nested_lookup
from TwitterAPI import TwitterAPI

articlestopost = []

# API LOGIN - INSERT YOUR OWN VALUES HERE
client_key = ''
client_secret = ''
access_token = ''
access_secret = ''
api = TwitterAPI(client_key, client_secret, access_token, access_secret)


# POLITIKEN.DK SEARCH #
SEARCH_TERM = 'url:"politiken.dk/del/"'
PRODUCT = '30day'
LABEL = 'prod'

r = api.request('tweets/search/%s/:%s' % (PRODUCT, LABEL), 
                {'query':SEARCH_TERM})

tweet_data = json.loads(r.text)
prettyjson = json.dumps(tweet_data, ensure_ascii=False, indent=4) # Only needed for debugging to pretify json

# Looks for all instances of expanded_url in json	
linklist = list(set(nested_lookup('expanded_url', tweet_data)))

urllist = []
for link in linklist:
	if "politiken.dk/del" in link:
		urllist.append(link)

# Request articles and get titles and dates and sort by dates
articlelist = []
titlecheck = []

for url in urllist:
	try:
		data = requests.get(url)
		result = data.text
		if '"isAccessibleForFree": "True"' not in result:
			soup = BeautifulSoup(result, "lxml")
			# Finds titles and timestamps
			title = soup.find('meta', attrs={'property':'og:title'})
			title = title['content']
			timestamp = soup.find('meta', attrs={'property':'article:published_time'})
			timestamp = timestamp['content']
			dateofarticle = datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S%z')
			realurl = data.history[0].headers['Location']
			if title not in titlecheck:
				articlelist.append({"title": title, "url": realurl, "date": dateofarticle})
				titlecheck.append(title)			
	except:
		print(url)
			
articlelist_sorted = sorted(articlelist, key=lambda k: k['date'], reverse=True) 

# Check if article is already posted and update list of posted articles
with open("./pol_published.json", "r", encoding="utf8") as fin:
	alreadypublished = list(json.load(fin))
	for art in articlelist_sorted:
		url = art['url']
		token = url.index("?shareToken")
		url = url[:token]
		if url not in alreadypublished:
			alreadypublished.append(url)
			articlestopost.append(art)
	# Save updated already published links
	with open("./pol_published.json", "wt", encoding="utf8") as fout:
		alreadypublishedjson = json.dumps(alreadypublished)
		fout.write(alreadypublishedjson)

# ZETLAND.DK SEARCH #
SEARCH_TERM = 'url:"zetland.dk/historie"'
PRODUCT = '30day'
LABEL = 'prod'

r = api.request('tweets/search/%s/:%s' % (PRODUCT, LABEL), 
                {'query':SEARCH_TERM})

tweet_data = json.loads(r.text)
prettyjson = json.dumps(tweet_data, ensure_ascii=False, indent=4) # Only needed for debugging to pretify json

# Looks for all instances of expanded_url in json	
linklist = list(set(nested_lookup('expanded_url', tweet_data)))

urllist = []
for link in linklist:
	if "zetland.dk/historie" in link:
		urllist.append(link)

# Request articles and get titles and dates and sort by dates
articlelist = []
titlecheck = []

for url in urllist:
	try:
		data = requests.get(url)
		result = data.text

		# Soup site and create a dictionary of links and their titles and dates
		articledict = {}
		soup = BeautifulSoup(result, "lxml")

		title = soup.find('meta', attrs={'property':'og:title'})
		title = title['content']
		
		timestamp = soup.find('meta', attrs={'property':'article:published_time'})
		timestamp = timestamp['content']
		timestamp = timestamp[:timestamp.find("+")]
		dateofarticle = datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S.%f')
		
		if title not in titlecheck:
			articlelist.append({"title": title, "url": url, "date": dateofarticle})
			titlecheck.append(title)
	except:
		print(url)
			
articlelist_sorted = sorted(articlelist, key=lambda k: k['date'], reverse=True) 

# Check if article is already posted and update list of posted articles
with open("./zet_published.json", "r", encoding="utf8") as fin:
	alreadypublished = list(json.load(fin))
	for art in articlelist_sorted:
		title = art['title']
		if title not in alreadypublished:
			alreadypublished.append(title)
			articlestopost.append(art)
	# Save updated already published links
	with open("./zet_published.json", "wt", encoding="utf8") as fout:
		alreadypublishedjson = json.dumps(alreadypublished, ensure_ascii=False)
		fout.write(alreadypublishedjson)


# POST TO TWITTER #
if articlestopost:
	for art in articlestopost:
		if "zetland" in art['url']:
			medium = "Zetland"
		else:
			medium = "Politiken"
		status = "En flink abonnent på " + medium + " har delt en betalingsartikel. God fornøjelse! " + art['url']
		r = api.request('statuses/update', {'status': status})
		time.sleep(5)

Updated Python program for having E-boks messages sent to your email

I have previously written about my Python program for having messages from E-boks sent to an email address.

Now E-boks has upgraded its security so that you have to use NemID to activate E-boks on your mobile phone before you are allowed to read E-boks without NemID.

So I had to fix my program.

The solution was to use the Android emulator Nox, install the E-boks app, activate it with NemID and use Charles to monitor the network traffic. I then reused the app's communication with E-boks' server in my program.

The app sends a header with a “deviceid” and a long “challenge” string, plus XML content with your CPR number and password, to log on to E-boks with “slevel 25”. Presumably “slevel” means security level?

In my program I copied the header and the content from Charles into my first communication with E-boks (some details have been edited out):

content = '<?xml version="1.0" encoding="utf-8"?><Logon xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="urn:eboks:mobile:1.0.0"><App version="3.6.1-MOBILEACCESS2.210" os="Android" osVersion="4.4.2" device="SM-G925F" /><User identity="[my CPR number]" identityType="P" nationality="DK" pincode="[my password]" /></Logon>'

authstr = 'logon deviceid="[an id string]", datetime="2019-05-29 06:25:51Z", challenge="[a very long string]"'

The finished program

If you want to use the finished program, you need to:

  1. Install Nox (or another Android emulator)
  2. Install Charles (or another tool for monitoring network traffic)
  3. Set up Nox to use Charles' IP as its proxy server
  4. Enable inspection of SSL traffic in Charles
  5. Install the E-boks app in Nox and activate the app with your NemID
  6. Start the E-boks app again, log in with your CPR number and password, and log the traffic in Charles
  7. Copy "content" and "authstr" from Charles (see the screenshot) and insert them into the program where it says content = '' and authstr = ''

Good luck!

# -*- coding: utf-8 -*-
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com.
# Based on https://github.com/dk/Net-Eboks perl API for eboks.dk by Dmitry Karasik. Thanks!
""" This program logs on to e-boks.dk and takes new messages and sends them
to an e-mail. It requires prior Nemid activation using the E-boks app, e.g.
running the Nox emulator and a traffic monitor such as Charles. It also
requires access to a secure (SSL) SMTP server and mail account for sending e-mails. """

# Necessary modules
from datetime import datetime						# Current date and time
import requests										# Communicating with E-boks
import xml.etree.ElementTree as ET					# Parse E-boks XML responses
import smtplib										# Sending e-mails
from email.mime.multipart import MIMEMultipart		# Creating multipart e-mails
from email.mime.text import MIMEText				# Attaching text to e-mails
from email.mime.application import MIMEApplication	# Attaching pdf to e-mails
from email.mime.image import MIMEImage				# Attaching images to e-mails
from email.utils import formataddr					# Used for correct encoding of senders with special characters in name (e.g. Københavns Kommune)
import chardet										# Text message character set detection
import time											# Pause between e-mails sent

# Configuration data
data = {
'emailserver': '', 	# Your mail server hostname: host.server.dk
'emailserverport': 465,					# Mail server port, e.g. 465
'emailusername': '',	# Sender mail account username
'emailpassword': '',		# Sender mail account password
'emailfrom': '',		# Sender e-mail, e.g. trump@usa.gov
'emailto': '',		# Recipient e-mail, e.g. hillary@clinton.net
'numberofmessagesperfolder': '10',		# Number of messages to request (10 is usually enough)
'unreadstatusvalue': "true",			# Normally "true". If "false" also read messages are sent
'unreadmorethan': 0,					# Normally 0, only unread messages are sent. If -1 all messages are sent
'sendemails': True,						# If True, e-mails are sent, if False, they are not
'country': 'DK',
'type': 'P',
'deviceid': '2',    # Use id from http traffic monitor
'datetime': '',
'root': 'rest.e-boks.dk',
'nonce': '',
'sessionid': '',
'response': '523b931af795698785df1eb85e8c10ea0687a46edb3a48468943f2d368fe725a',
'uid': '',
'uname': '',
'challenge': ''
}

# Gets current date and time for E-boks challenge
now = datetime.now()
data['datetime'] = datetime.strftime(now, '%Y-%m-%d %H:%M:%SZ')

# These functions are used to create sessionid, nonce and authstring values for communicating
# with E-boks throughout the program
def sessionid(authenticate):
	sessionstart = authenticate.find('sessionid="')+len('sessionid="')
	sessionend = authenticate.find('"', sessionstart)
	data['sessionid'] = authenticate[sessionstart:sessionend]

def nonce(authenticate):
	noncestart = authenticate.find('nonce="')+len('nonce="')
	nonceend = authenticate.find('"', noncestart)
	data['nonce'] = authenticate[noncestart:nonceend]

def createauthstring():
	authstr = 'deviceid="' + data['deviceid'] + '",nonce="' + data['nonce'] + ',sessionid="' + data['sessionid'] + '",response="' + data['response'] + '"'
	return authstr

# Logon to mail server
server = smtplib.SMTP_SSL(data['emailserver'], data['emailserverport'])
server.login(data['emailusername'], data['emailpassword'])
	
# First logon to e-boks
url = "https://" + data['root'] + "/mobile/1/xml.svc/en-gb/session"

# Use XML string from http traffic monitor
content = ''

# Use string from http traffic monitor
authstr = ''

headers = {
'Content-Type': 'application/xml',
'Content-Length': str(len(content)),
'X-EBOKS-AUTHENTICATE': authstr,
'Accept': '*/*',
'Accept-Language': 'en-US',
'Accept-Encoding': 'gzip,deflate',
'Host': data['root'],
}

r = requests.put(url, headers=headers, data=content)

authenticate = r.headers['X-EBOKS-AUTHENTICATE']
nonce(authenticate)
sessionid(authenticate)

xml = ET.fromstring(r.text)
# Saves username and user id
data['uname'] = xml[0].attrib['name']
data['uid'] = xml[0].attrib['userId']

# Get folder data from e-boks
url = 'https://' + data['root'] + '/mobile/1/xml.svc/en-gb/' + data['uid'] + '/0/mail/folders'
authstr = createauthstring()

headers = {
'X-EBOKS-AUTHENTICATE': authstr,
'Accept': '*/*',
'Accept-Language': 'en-US',
'Host': data['root'],
}

r = requests.get(url, headers=headers)
authenticate = r.headers['X-EBOKS-AUTHENTICATE']
nonce(authenticate)

xml = ET.fromstring(r.text)

eboks_folders = xml

# Get folder id's and numbers of unread messages
for folder in eboks_folders:
	folderid = folder.attrib['id']
	unread = folder.attrib['unread']
	
	# Get messages ONLY if any unread messages in folder
	if int(unread) > data['unreadmorethan']:		# Usually > 0. Can be changed to == 0 for debugging purposes
		
		# Get list of messages
		url = 'https://' + data['root'] + '/mobile/1/xml.svc/en-gb/' + data['uid'] + '/0/mail/folder/' + folderid
		
		authstr = createauthstring()

		headers = {
		'X-EBOKS-AUTHENTICATE': authstr,
		'Accept': '*/*',
		'Accept-Language': 'en-US',
		'Host': data['root'],
		}

		params = {
		'skip': '0',
		'take': data['numberofmessagesperfolder']
		}

		r = requests.get(url, headers=headers, params=params)
		authenticate = r.headers['X-EBOKS-AUTHENTICATE']
		nonce(authenticate)		
		
		xml = ET.fromstring(r.text)
		eboks_messages = xml

		i = 0
		max = int(params['take']) - 1
		while i <= max:
			for child in eboks_messages:
				messageid = child[i].attrib['id']
				subject  = child[i].attrib['name']
				sender = child[i][0].text
				unreadstatus = child[i].attrib['unread']
				attachmentcount = child[i].attrib['attachmentsCount']
				format = child[i].attrib['format'].lower()
				received = child[i].attrib['receivedDateTime']
				
				i += 1
				
				# Get only messages that are unread
				if unreadstatus == data['unreadstatusvalue']:	# Usually true. Can be changed to false for debugging purposes
					
					# Start e-mail
					msg = MIMEMultipart()
					msg['From'] = formataddr((sender, data['emailfrom']))
					msg['To'] = data['emailto']
					msg['Subject'] = "E-boks: " + subject
					body = ""
					
										
					# Get message (marks it as read)
					url = 'https://' + data['root'] + '/mobile/1/xml.svc/en-gb/' + data['uid'] + '/0/mail/folder/' + folderid + '/message/' + messageid

					authstr = createauthstring()

					headers = {
					'X-EBOKS-AUTHENTICATE': authstr,
					'Accept': '*/*',
					'Accept-Language': 'en-US',
					'Host': data['root'],
					}

					r = requests.get(url, headers=headers)
					authenticate = r.headers['X-EBOKS-AUTHENTICATE']
					nonce(authenticate)
					
					# Get primary message content
					url = 'https://' + data['root'] + '/mobile/1/xml.svc/en-gb/' + data['uid'] + '/0/mail/folder/' + folderid + '/message/' + messageid + '/content'

					authstr = createauthstring()

					headers = {
					'X-EBOKS-AUTHENTICATE': authstr,
					'Accept': '*/*',
					'Accept-Language': 'en-US',
					'Host': data['root'],
					}

					r = requests.get(url, headers=headers)
					authenticate = r.headers['X-EBOKS-AUTHENTICATE']
					nonce(authenticate)
					
					# Attach primary message content to e-mail
					if format in ("txt","text","plain"):
						characterset = chardet.detect(r.content)
						r.encoding = characterset['encoding']
						body = r.text
						msg.attach(MIMEText(body, 'plain'))
					elif format in ("html","htm"):
						characterset = chardet.detect(r.content)
						r.encoding = characterset['encoding']
						body = r.text
						msg.attach(MIMEText(body, 'html'))						
					elif format == "pdf":
						filename = "".join().rstrip() + "." + format
						part = MIMEApplication(r.content)
						part.add_header('Content-Disposition', 'attachment', filename = filename)
						msg.attach(part)
					elif format in ("gif","jpg","jpeg","tiff","tif","webp"):
						filename = "".join().rstrip() + "." + format
						part = MIMEImage(r.content)
						part.add_header('Content-Disposition', 'attachment', filename = filename)
						msg.attach(part)
										
					# Get attachment data if message has attachments
					if int(attachmentcount) > 0:

						url = 'https://' + data['root'] + '/mobile/1/xml.svc/en-gb/' + data['uid'] + '/0/mail/folder/' + folderid + '/message/' + messageid

						authstr = createauthstring()

						headers = {
						'X-EBOKS-AUTHENTICATE': authstr,
						'Accept': '*/*',
						'Accept-Language': 'en-US',
						'Host': data['root'],
						}

						r = requests.get(url, headers=headers)
						authenticate = r.headers['X-EBOKS-AUTHENTICATE']
						nonce(authenticate)

						xml = ET.fromstring(r.text)
						
						eboks_attachment = xml
						
						# Gets id, name and format of attachment
						for child in eboks_attachment:
							for subtree in child:
								attachmentid = subtree.attrib['id']
								attachmenttitle = subtree.attrib['name']
								attachmentformat = subtree.attrib['format']
				
								# Gets the actual attachment
								url = 'https://' + data['root'] + '/mobile/1/xml.svc/en-gb/' + data['uid'] + '/0/mail/folder/' + folderid + '/message/' + attachmentid + '/content'
									
								authstr = createauthstring()

								headers = {
								'X-EBOKS-AUTHENTICATE': authstr,
								'Accept': '*/*',
								'Accept-Language': 'en-US',
								'Host': data['root'],
								}

								r = requests.get(url, headers=headers)
								authenticate = r.headers['X-EBOKS-AUTHENTICATE']
								nonce(authenticate)
								
								# Attach attachment to e-mail
								if attachmentformat in ("txt","text","html","htm","plain"):
									filename = "".join().rstrip() + "." + attachmentformat
									r.encoding = "utf-8"
									part = MIMEText(r.text)
									part.add_header('Content-Disposition', 'attachment', filename = filename)
									msg.attach(part)
								elif attachmentformat == "pdf":
									filename = "".join().rstrip() + "." + attachmentformat
									part = MIMEApplication(r.content)
									part.add_header('Content-Disposition', 'attachment', filename = filename)
									msg.attach(part)
								elif attachmentformat in ("gif","jpg","jpeg","tiff","tif","webp"):
									filename = "".join().rstrip() + "." + attachmentformat
									part = MIMEImage(r.content)
									part.add_header('Content-Disposition', 'attachment', filename = filename)
									msg.attach(part)

					# Send e-mail
					if data['sendemails'] == True:
						print("sending")
						msg.attach(MIMEText(body, 'plain'))
						server.sendmail(data['emailfrom'], data['emailto'], msg.as_string())
						time.sleep(2)