Fighting phishing on lnk.dk

Unfortunately, some criminals have discovered my short link service lnk.dk and are using the site to create short links that point to various phishing forms, most of them in French, a few in Danish.

I would prefer that my site is only used for legal purposes, so as a first step I have added a control question to the form for creating links. I hope that only honest people can answer it, and that it is reasonably easy for them to do so:

New control question about a well-known Danish cyclist on lnk.dk

To implement the new field, I edited my Django application's forms.py, adding the field and its validation rules:

from django.forms import ModelForm
from django import forms
from .models import Link
from django.core.exceptions import ValidationError

class LinkForm(ModelForm):
    everyoneknows = forms.CharField(label='Hvad er fornavnet på cykelrytteren, der vandt Tour de France for mænd i 2022?', error_messages={'required': 'Indtast cykelrytterens fornavn'})

    def clean_everyoneknows(self):
        answer = self.cleaned_data['everyoneknows'].lower()
        if answer != 'jonas':
            raise ValidationError("Det fornavn, du har indtastet, er forkert.")
        return answer
    
    def __init__(self, *args, **kwargs):
        super(LinkForm, self).__init__(*args, **kwargs)
        self.fields['destination'].widget.attrs['placeholder'] = 'https://eksempel.dk/meget/lang/url'
        self.fields['shortlink'].widget.attrs['placeholder'] = 'eksempel'
        self.fields['shortlink'].label_suffix = ""  # Remove colon after label
        self.fields['shortlink'].required = False   # Not required in form

    def clean_shortlink(self):
        shortlink = self.cleaned_data['shortlink']
        return shortlink.lower()
    
    class Meta:
        model = Link
        fields = ['destination', 'shortlink']
        labels = {
            'shortlink': ('Evt. selvvalgt kort link:'),
        }
        error_messages = {
            'destination': {
                'max_length': ('Din destinationsurl er for lang til denne kortlinkservice.'),
                'invalid': ('Din destinationsurl er ikke en gyldig adresse. Husk http://, https:// eller ftp:// foran dit link, hvis du har glemt det.'),
            },
            'shortlink': {
                'unique': ('Det selvvalgte link, du har valgt, er allerede i brug. Find på et andet.'),
                'max_length': ('Dit selvvalgte link må maksimalt være 100 tegn langt.'),
                'invalid': ('Du kan kun bruge bogstaver (dog ikke æ, ø, å - kun ASCII-tegnsættet), tal, bindestreg og understreg i din selvvalgte adresse.'),
            }
        }
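
For context, here is a minimal sketch of how a form like this might be wired into a view. The view and template names are hypothetical; this is not the actual lnk.dk code:

# views.py (hypothetical sketch, not the actual lnk.dk view)
from django.shortcuts import redirect, render

from .forms import LinkForm

def create_link(request):
	if request.method == 'POST':
		form = LinkForm(request.POST)
		# is_valid() runs clean_everyoneknows(), so a wrong first name
		# surfaces as a field error instead of creating the link
		if form.is_valid():
			form.save()
			return redirect('link_created')
	else:
		form = LinkForm()
	return render(request, 'links/create.html', {'form': form})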

It will be interesting to see whether the change has any effect.

Google AdSense is a mess

Today I received an e-mail telling me about something to do with making money through ads. I used to have an AdSense account for making sweet money on the internet, but I closed it years ago, partly because I earned hardly anything, partly because I was tired of tracking my few website users for cents.

I tried the Sign in link in the e-mail to see if I could possibly get rid of future mailings about a product I am not using. This took me down a rabbit hole of errors and user-unfriendly help pages…

An unwanted mail from Google

When I clicked Sign in, I got to a page saying my account was closed (which I knew) and asking whether I would like to reactivate it.

It looked not unlike this image, which I found somewhere. Now I wish I had taken a screenshot, but for reasons I will disclose later, I am no longer able to access the page:

Google telling me my account is closed, but still sending me e-mails about it

Does YouTube hold a solution?

Seeing as the original e-mail mentioned YouTube, I thought I might have a setting somewhere on YouTube I could disable to un-link my YouTube videos from my closed AdSense Account.

After 5 minutes of browsing I concluded that no such option existed.

In which I try to get support

Next I thought that if I could delete my AdSense account entirely, instead of merely having it closed, I might be rid of further mailings.

I looked for a delete option somewhere, but none existed, so I tried Google AdSense Help. I tried more eloquent phrasings than “delete adsense account”, but every option only led to something called Community. And DuckDuckGo’ing “delete adsense account” led to many Community requests for deleting accounts, but only answers such as You can’t ‘delete’ an Adsense account. You have to close it properly, following the official instructions. (I really like the quotation marks around delete in that answer.)

Community does not equal a contact option, Google.

Try the opposite!

Next I had a stupid idea. How about reactivating my account, looking for an option to disable mailings, and then closing it again? Counter-intuitive, I know, but I have succeeded with similar tactics before.

This happened when I clicked the reactivation link on the account closed page. The error is fully reproducible by clicking the link again, which I tried:

I finally got ENGINEERS on the case, but not in the way I hoped

I’ve got rights

As a citizen of Europe, I have certain rights. One of them is the right to contact big corporations holding data about me and tell them to delete it, and to have them refuse due to something they call legitimate interests, which roughly translates to making dollars by knowing my shopping interests.

I went back to AdSense support and slowly typed Fully delete my account under GDPR to let Google know I mean business.

I was happy to see an actual envelope icon in a button saying e-mail. I clicked. This happened:

I finally thought Google took me seriously, and maybe they do, just not seriously enough for a working e-mail contact

I tried again. Many times. Nothing changed.

Seriously, Google. AdSense is a mess.

Commercial Danish websites do not respect a “no thanks”

It annoys me to constantly be asked whether I want to be tracked with cookies on various websites. I do not.

It puzzled me that, for instance when visiting PriceRunner, I seemed to be asked the same question again and again in their cookie dialog:

PriceRunner’s frequently interrupting cookie dialog.

Different expiry dates for “yes” and “no”

I examined the expiry dates of PriceRunner’s cookies, and it turned out that:

  • If you say “No thanks” to cookies, the cookie recording that you do not want to be tracked expires after 2 hours. Once 2 hours have passed and you visit PriceRunner again, you are asked again.
  • If you say “Yes please” to cookies, the cookie recording that you do want to be tracked expires after 1 year. You can therefore visit PriceRunner again and again (very relevant here during Black Week) without being bothered by new cookie questions (see the sketch below the screenshots).
Yes to cookies on PriceRunner: you get to reconsider your consent in 1 year.
No to cookies on PriceRunner: you get to reconsider your “no” in 2 hours.
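
The mechanics behind this are simple. In Django terms it amounts to something like the hypothetical sketch below (the view name and cookie name are made up; this is not PriceRunner’s actual code):

# Hypothetical sketch of the asymmetry, not PriceRunner's actual implementation
from django.http import HttpResponse

def cookie_consent(request):
	response = HttpResponse("Thanks for your answer")
	if request.POST.get('answer') == 'yes':
		# a "yes" is remembered for a whole year
		response.set_cookie('consent', 'yes', max_age=365 * 24 * 60 * 60)
	else:
		# a "no" is only remembered for two hours, after which the dialog reappears
		response.set_cookie('consent', 'no', max_age=2 * 60 * 60)
	return response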

Is it allowed to make saying no harder than saying yes?

At the end of October 2021, I tried to find out whether this practice is/was legal.

The friendly people at Datatilsynet (the Danish Data Protection Agency) have published guidance on cookie dialog boxes. The main line of the guidance is that it should not be harder to be “anti-cookie” than “pro-cookie”. Here are a couple of quotes:

“Consent must be given freely. The purpose of the requirement of voluntariness is to create transparency for the data subject and to give the data subject a choice and control over their personal data. Consent is therefore not considered freely given if the data subject does not have a real or free choice.” (page 12)

“Electronic consent requests must not be unnecessarily disruptive, but at the same time it may be necessary for the consent request to disturb the user experience to some degree if the request is to be effective. The task for the data controller is thus to choose a solution that strikes the right balance.” (page 16)

“In addition, as mentioned above, it must generally also be just as easy to refrain from giving consent to the processing of one’s personal data as it is to give it.” (page 17)

To me it sounded as if there was a chance that PriceRunner’s practice (shared by many others) was illegal.

It (apparently) was not.

No help to be found at Datatilsynet

Unfortunately, Datatilsynet does not think I have a case. In my eyes, it is hard to see any other reason for repeatedly asking for consent after a “no” to cookies than to nudge the user into eventually saying yes, while the risk of clicking yes by accident grows the more often you are asked.

But: in Datatilsynet’s assessment, that is not something the data protection rules protect against. Here is the core of their reasoning:

“[…] the data protection rules protect the right to give consent, including that the conditions for a valid consent are met. In this connection, it is Datatilsynet’s assessment that even though the website’s cookies have different expiry times depending on whether you give or refuse consent, this does not change the fact that you as a user still have the option of refusing the website’s processing of information about you. The user is thus still given a voluntary choice.

Datatilsynet further assesses that the fact that you as a user experience having to refuse the website’s processing of personal data via cookies several times is not an element the data protection rules protect the data subject against.”

So what do I do?

Well. I try to minimise my use of websites with questionable ethics.

If I cannot keep myself from visiting them, I block the cookie dialog with my ad-blocking browser extension.

Here are uBlock Origin filters for PriceRunner:

www.pricerunner.dk###consent
www.pricerunner.dk##+js(rc, noscroll, body)

Books I read, or: Python and Django let me realise my nerdiest dreams

I like to document my doings, and for about 15 years I have been documenting the books I read. First in Notepad, then in Excel, and finally in Python and Django with a database somewhere in the background. I am amazed at what experts help amateurs achieve.

Take a look at what I made

This post explains the process of collecting data about my reads in little detail, and the code behind the page in too great detail.

Some books of 2020
Statistics

Finding information ONLINE

Most data was crawled from Danish library resources, Goodreads and Wikipedia with varying success. A lot was entered manually, especially for works in translation. I spent hours and hours being pedantic.

Even though librarians have been managing data longer than anyone else on the planet, there is no authoritative relational database where you can look up when some book by some author was first published and when the first Danish-language version came out. In defence of librarians, many writers go to great lengths to make data management on books hard (one example is the genre “non-fiction novel” used by Spanish writer Javier Cercas).

The mysteries of Goodreads

I was mystified by Goodreads’ ability to place study guides and commentary on great works of literature first in its search results (and many more strange things), and terrified by Google displaying author birthdays, available nowhere else I could find on the web, on top of its search results.

Also, Goodreads magically has editions of books that are older than the date on which Goodreads claims the book was first published.

Goodreads: When what you’re searching for is nowhere near the first hit
How does this autocomplete work?

I wonder?

First published on April 5, but first listed edition is from March 23. Huh?

Adding books

After crawling for data, I made a form to add new books:

Step 1. Push “Look up”
PROFIT!

The form

This was a breeze in Django. Here’s forms.py:

from django.forms import ModelForm
from books.models import Author, Title, Read

class AuthorForm(ModelForm):
	class Meta:
		model = Author
		fields = ['first_name', 'last_name','gender','country','biography','birth_date','data_quality']
	
class TitleForm(ModelForm):
	class Meta:
		model = Title
		fields = ['title','genre','read_language','original_language','publisher','isbn','published_date','first_published','cover_url','ereolen_url','biblo_dk_url','good_reads_url','pages','original_title']	
		
class ReadForm(ModelForm):
	class Meta:
		model = Read
		fields = ['date']	

The view:

And here’s the logic from views.py (I probably shouldn’t uncritically be saving cover URLs found on the internet to my server, but):

import calendar
import shutil

import requests
from PIL import Image

from django.conf import settings
from django.contrib.auth.decorators import login_required
from django.db.models import Avg, Count, Sum
from django.shortcuts import render

from books.forms import AuthorForm, TitleForm, ReadForm
from books.helpers import get_author, get_title	# assuming the helpers live in books/helpers.py (shown below)
from books.models import Author, Read, Title

# Add a read to database
@login_required
def add_read(request):
	context = {}
	book_saved = False
	author_form = AuthorForm()
	title_form = TitleForm()
	read_form = ReadForm()
	if request.method == 'POST':	# AND SUBMIT BUTTON
		author_form = AuthorForm(request.POST)
		title_form = TitleForm(request.POST)
		read_form = ReadForm(request.POST)
		if author_form.is_valid() and title_form.is_valid() and read_form.is_valid():
			author_data = author_form.cleaned_data
			title_data = title_form.cleaned_data
			read_data = read_form.cleaned_data

			existing_author = False
			existing_title = False
				
			# AUTHOR LOGIC - MAY ALSO MODIFY TITLE DATA
			# Check if already exist
			try:
				author = Author.objects.get(first_name=author_data['first_name'], last_name=author_data['last_name'])
				existing_author = True
				context['existing_author'] = existing_author
			except:
				if 'lookup' in request.POST:
					if any(not value for value in author_data.values()):
						author_data, title_data = get_author(author_data, title_data)	# try to fetch data
			
			# TITLE LOGIC - MAY ALSO MODIFY AUTHOR DATA
			# Check if title already exists; will only work if author has been found. (Book is re-read)
			try:
				if author:
					title = Title.objects.get(authors=author, title=title_data['title'])
					existing_title = True
					context['existing_title'] = True
			except:
				if 'lookup' in request.POST:
					if any(not value for value in title_data.values()):
						title_data, author_data = get_title(title_data, author_data)	# try to fetch data
			
			# Render form with data from database or collected data
			if 'lookup' in request.POST:
				if not existing_author:
					author_form = AuthorForm(author_data)
				else:
					author_form = AuthorForm(instance=author)
					
				if not existing_title:
					title_form = TitleForm(title_data)
				else:
					title_form = TitleForm(instance=title)
			
			# Save data
			if 'save' in request.POST:
				if not existing_author:
					author = author_form.save()
				if not existing_title:	
					title = title_form.save()
					title.authors.add(author)
					if title.cover_url:
						file = requests.get(title.cover_url, stream=True)
						save_location = settings.STATIC_ROOT + "books/covers/"

						if '.jpg' in title.cover_url:
							ending = '.jpg'
						elif '.png' in title.cover_url:
							ending = '.png'
						elif '.webp' in title.cover_url:
							ending = '.webp'
						else:
							ending = '.jpg'
						
						id = title.id
						filename = str(id) + ending
						with open(save_location+filename, 'wb') as f:
							file.raw.decode_content = True
							shutil.copyfileobj(file.raw, f)
						
						title.cover_filename = filename
						title.save()
						
						#create thumbnail							
						image = Image.open(save_location+filename).convert("RGB")
						maxsize = 150, 150
						image.thumbnail(maxsize, Image.ANTIALIAS)
						image.save(save_location+"150/"+str(id)+".webp", "WEBP")


				save_read = read_form.save(commit=False)
				save_read.title = title
				save_read = read_form.save()
				
				# Set save variable to True and display empty form
				book_saved = True
				author_form = AuthorForm()
				title_form = TitleForm()
				read_form = ReadForm()				
				
	context.update({'author_form': author_form, 'title_form': title_form, 'read_form': read_form, 'book_saved': book_saved})
	return render(request, 'books/add.html', context)

The helper function

If you are a really curious and patient individual, you may be wondering about the get_author and get_title functions. You are in luck! Here is most of helpers.py which helps me scrape some data from the internet and will probably break in the future:

# HELPER FUNCTIONS #
import re
from datetime import date, datetime

import requests
from bs4 import BeautifulSoup
from dateparser.search import search_dates	# search_dates presumably comes from the dateparser package

def numbers_in_string(string):
	numbers = sum(character.isdigit() for character in string)
	return numbers

def get_author(author_data, title_data):
	# WIKIPEDIA
	if not author_data['biography']:
		if not author_data['country'] == 'da':
			url = 'https://en.wikipedia.org/w/index.php?search=intitle%3A%22' + author_data['first_name'] + " " + author_data['last_name'] + '%22&title=Special:Search&profile=advanced&fulltext=1&ns0=1'
		else:
			url = 'https://da.wikipedia.org/w/index.php?search=intitle%3A%22' + author_data['first_name'] + " " + author_data['last_name'] + '%22&title=Special:Search&profile=advanced&fulltext=1&ns0=1'
	else:
		url = author_data['biography']
			
	author_request = requests.get(url)

	if author_request.status_code == 200:
		soup = BeautifulSoup(author_request.text, "lxml")
		try:
			first_result = soup.find('div', {'class':'mw-search-result-heading'}).a['href']
			if not author_data['country'] == 'da':
				result_page = 'https://en.wikipedia.org' + first_result
			else:
				result_page = 'https://da.wikipedia.org' + first_result
			page_request = requests.get(result_page)
			soup = BeautifulSoup(page_request.text, "lxml")
			# If not provided, set biography
			if not author_data['biography']:
				author_data['biography'] = result_page
			# If not provided, try to get birth_date
			if not author_data['birth_date']:
				try:
					birthday = soup.find('span', {'class':'bday'}).string
					author_data['birth_date'] = datetime.strptime(birthday, '%Y-%m-%d')
				except:
					try:
						birthday = soup.find('th', text="Født").parent.get_text()
						# sometimes the above doesn't return a space between year and next info causing a fuckup
						try:
							find_year = re.search("\d\d\d\d\S", birthday).span()[1]
							birthday = birthday[:find_year-1] + " " + birthday[find_year+-1:]
						except:
							pass
						# sometimes even more fuckery
						try:
							letters_and_numbers_together = re.search("[a-zA-Z]\d", birthday).span()[1]
							birthday = birthday[:letters_and_numbers_together-1] + " " + birthday[letters_and_numbers_together-1:]
						except:
							pass
						birthday_date = search_dates(birthday,languages=['da'])[0][1]
						author_data['birth_date'] = birthday_date
					except:
						paragraphs = soup.find_all('p')
						for paragraph in paragraphs:
							text = paragraph.get_text()
							if '(født' in text:
								birth_mention = text.find('(født')
								birth_string = text[birth_mention+1:text.find(")",birth_mention)]
								if len(birth_string) < 10:	# just a year, probably
									year = int(birth_string[5:10])
									birthday = date(year,1,1)
									author_data['birth_date'] = birthday
								else:
									birthday_date = search_dates(birth_string,languages=['da'])[0][1]
									author_data['birth_date'] = birthday_date
								break
			# If not provided, try to get country
			if not author_data['country']:
				try:
					birthplace = soup.find('div', {'class':'birthplace'}).get_text()
				except:
					try:
						birthplace = soup.find('th', text="Born").parent.get_text()
					except:
						pass
				if birthplace:
					country = get_country(birthplace)
					if not country:
						try:
							birthplace = soup.find('th', text="Nationality").find_next_sibling().string
							country = get_country(birthplace)
						except:
							pass
				if country:
					author_data['country'] = country
					if not title_data['original_language']:
						if country == 'us' or country == 'sc' or country == 'ir' or country == 'en' or country == 'au':
							country = 'en'
						title_data['original_language'] = country
		except:
			pass
	# GENDER
	if not author_data['gender']:
		request = requests.get('https://gender-api.com/get?name=' + author_data['first_name'] + '&key=vCjPrydWvlRcMxGszD')
		response = request.json()
		if response['gender'] == 'male':
			author_data['gender'] = 'm'
		elif response['gender'] == 'female':
			author_data['gender'] = 'f'
	if not author_data['data_quality']:
		if author_data['first_name'] and author_data['last_name'] and author_data['gender'] and author_data['country'] and author_data['birth_date'] and author_data['biography']:
			author_data['data_quality'] = 'med'
		else:
			author_data['data_quality'] = 'bad'
	# WIKIPEDIA ALTERNATIVE, ONLY FOR BOOKS READ IN DANISH
	if not author_data['biography'] and author_data['first_name'] and title_data['read_language'] == 'da':
		url = 'https://litteraturpriser.dk/henv/' + author_data['last_name'][0].lower() + '.htm'
		request = requests.get(url)
		soup = BeautifulSoup(request.text, "lxml")
		links = soup.find_all('a', href=True)
		for link in links:
			if len(link['href']) > 7:
				text = link.get_text().lower()
				if author_data['last_name'].lower() + ", " + author_data['first_name'].lower() == text:
					url = 'https://litteraturpriser.dk' + link['href']
					request = requests.get(url)
					soup = BeautifulSoup(request.text, "lxml")
					
					author_data['biography'] = request.url
					
					if not author_data['country']:
						author_data['country'] = 'da'
					
					if not author_data['birth_date']:
						born = soup.find(text=re.compile('Født'))
						if born:
							birthday_date = search_dates(born,languages=['da'])[0][1]
							author_data['birth_date'] = birthday_date
						else:
							born = soup.find(text=re.compile('f. '))
							birth_year = int(re.search("\d\d\d\d", born).group())
							author_data['birth_date'] = date(birth_year,1,1)
					if not title_data['original_language']:
						title_data['original_language'] = 'da'
					break
	
	
	return author_data, title_data

def get_ereolen(title_data, author_data):
	# EREOLEN
	soup = ""
	if not title_data['ereolen_url']:
		if title_data['isbn']:
			url = 'https://ereolen.dk/search/ting/' + title_data['isbn'] + '?&facets[]=facet.type%3Aebog'
		else:
			url = 'https://ereolen.dk/search/ting/' + author_data['first_name'] + " " + author_data['last_name']+ " " + title_data['title'] + '?&facets[]=facet.type%3Aebog'
		request = requests.get(url)
		try:
			search_soup = BeautifulSoup(request.text, "lxml")
			links = [a['href'] for a in search_soup.find_all('a', href=True) if '/collection/' in a['href']]
			book_request = requests.get('https://ereolen.dk' + links[0])
			soup = BeautifulSoup(book_request.text, "lxml")
			
			links = [a['href'] for a in soup.find_all('a', href=True) if '/object/' in a['href']]
			# ebooks and audiobook versions
			if len(links) == 4:
				book_request = requests.get('https://ereolen.dk' + links[0])
				soup = BeautifulSoup(book_request.text, "lxml")
			# SAVE HIT URL
			title_data['ereolen_url'] = 'https://ereolen.dk' + links[0]
		except:
			pass
	else:
		book_request = title_data['ereolen_url']
		book_request = requests.get(book_request)
		soup = BeautifulSoup(book_request.text, "lxml")

	if soup:
		if not title_data['published_date']:
			try:
				published = soup.find('div', class_={"field-name-ting-author"}).get_text()
				published = int(re.search("[(]\d\d\d\d[)]", published).group()[1:5])
				title_data['published_date'] = date(published,1,1)
			except:
				pass
		
		if not title_data['isbn']:
			try:
				isbn_tag = soup.find('div', class_={"field-name-ting-details-isbn"}) 
				title_data['isbn'] = isbn_tag.find('div', class_={"field-items"}).get_text()
			except:
				pass
		
		if not title_data['publisher']:
			try:
				publisher_tag = soup.find('div', class_={"field-name-ting-details-publisher"}) 
				title_data['publisher'] = publisher_tag.find('div', class_={"field-items"}).get_text()
			except:
				pass
		
		if not title_data['pages']:
			try:
				page_tag = soup.find('div', class_={"field-name-ting-details-extent"}) 
				title_data['pages'] = int(page_tag.find('div', class_={"field-items"}).get_text().replace(" sider",""))
			except:
				pass
		
		if not title_data['original_title']:
			try:
				original_title_tag = soup.find('div', class_={"field-name-ting-details-source"}) 
				title_data['original_title'] = original_title_tag.find('div', class_={"field-items"}).get_text()
			except:
				pass					
		
		if not title_data['cover_url']:
			covers = [img['src'] for img in soup.find_all('img') if '/covers/' in img['src']]
			title_data['cover_url'] = covers[0][:covers[0].find("?")]
	return title_data, author_data

def get_bibliotek_dk(title_data, author_data):
	search_url = 'https://bibliotek.dk/da/search/work?search_block_form=phrase.creator%3D%22' + author_data['first_name'] + " " + author_data['last_name'] + '%22+and+phrase.title%3D%22' + title_data['title'] + '%22&select_material_type=bibdk_frontpage&op=S%C3%B8g&n%2Famaterialetype%5Bterm.workType%253D%2522literature%2522%5D=term.workType%253D%2522literature%2522&year_op=%2522year_eq%2522&year_value=&form_id=search_block_form&sort=rank_main_title&page_id=bibdk_frontpage'
	
	request = requests.get(search_url)
	soup = BeautifulSoup(request.text, "lxml")
	hits = soup.find_all('div', {'class':'work mobile-page'})
	if not hits:
		url = 'https://bibliotek.dk/da/search/work?search_block_form=' + author_data['first_name'] + " " + author_data['last_name'] + " " + title_data['title'] +'&select_material_type=bibdk_frontpage%2Fbog&op=S%C3%B8g&n%2Famaterialetype%5Bterm.workType%253D%2522literature%2522%5D=term.workType%253D%2522literature%2522&year_op=%2522year_eq%2522&year_value=&form_build_id=form-TQ8TlT3HGFiKXyvz6cCFaiuTMZKimuHMF-p4q1Mb8ZI&form_id=search_block_form&sort=rank_main_title&page_id=bibdk_frontpage#content'
		request = requests.get(url)
		soup = BeautifulSoup(request.text, "lxml")
		hits = soup.find_all('div', {'class':'work mobile-page'})
	
	for hit in hits:
		id = hit['id']
		title = hit.find('h2', {'class':'searchresult-work-title'}).get_text()
		author = hit.h3.get_text()
		
		if title_data['title'].lower() in title.lower() or title.lower() in title_data['title'].lower() or len(hits) == 1:
			
			if 'basis' in id:
				link = id.replace("basis","-basis:")
			elif 'katalog' in id:
				link = id.replace("katalog","-katalog:")
			biblo_url = 'https://bibliotek.dk/da/work/' + link
			
			request = requests.get(biblo_url)
			
			if not title_data['biblo_dk_url']:
				title_data['biblo_dk_url'] = biblo_url
			
			soup = BeautifulSoup(request.text, "lxml")
			
			if not title_data['cover_url']:
				try:
					img = soup.find('div', {'class':'bibdk-cover'}).img['src'].replace("/medium/","/large/")
					img = img[:img.find("?")]
					title_data['cover_url'] = img
				except:
					pass
			
			book_data = soup.find('div', {'class':'manifestation-data'})
			
			if not title_data['pages']:
				try:
					pages = book_data.find('div', {'class':'field-name-bibdk-mani-format'}).find('span', {'class':'openformat-field'}).string.strip()
					pages = pages[:pages.find(" ")]
					pages = int(pages)
					title_data['pages'] = pages
				except:
					pass
			if not title_data['publisher']:
				try:
					publisher = book_data.find('div', {'class':'field-name-bibdk-mani-publisher'}).find('span', {'property':'name'}).string
					title_data['publisher'] = publisher
				except:
					pass
			
			if not title_data['published_date'] or not title_data['first_published']:
				try:
					first_published = book_data.find('div', {'class':'field-name-bibdk-mani-originals'}).find('span', {'class':'openformat-field'}).string.strip()
					published = int(re.search("\d\d\d\d", first_published).group())
					if not title_data['published_date']:
						title_data['published_date'] = date(published,1,1)
					if not title_data['first_published'] and title_data['read_language'] == 'da' and title_data['original_language'] == 'da':
						title_data['first_published'] = date(published,1,1)
				except:
					try:
						pub_year = int(book_data.find('div', {'class':'field-name-bibdk-mani-pub-year'}).find('span', {'class':'openformat-field'}).string.strip())
						title_data['published_date'] = date(pub_year,1,1)
						if title_data['read_language'] == 'da' and title_data['original_language'] == 'da':
							try:
								edition = book_data.find('div', {'class':'field-name-bibdk-mani-edition'}).find('span', {'class':'openformat-field'}).string.strip()
								if edition == "1. udgave":
									title_data['first_published'] = date(pub_year,1,1)
							except:
								pass
					except:
						pass
				break
	return title_data, author_data

def get_goodreads(title_data, author_data):
	if not title_data['good_reads_url']:
		searchterm = author_data['first_name'] + " " + author_data['last_name'] + " " + title_data['title']
		search_url = 'https://www.goodreads.com/search?utf8=✓&q=' + searchterm + '&search_type=books'
		response = requests.get(search_url)
		search_soup = BeautifulSoup(response.text, "lxml")
		all_results = search_soup.find_all('tr', {'itemtype':'http://schema.org/Book'})
		if not all_results:
			search_url = 'https://www.goodreads.com/search?utf8=✓&q=' + title_data['title'] + '&search_type=books'
			response = requests.get(search_url)
			search_soup = BeautifulSoup(response.text, "lxml")
			all_results = search_soup.find_all('tr', {'itemtype':'http://schema.org/Book'})
		if all_results:
			good_match = False
			#exact match
			for result in all_results:
				gr_author = result.find('span', {'itemprop':'author'}).get_text().strip()
				gr_author = gr_author.replace(' (Goodreads Author)','')
				if "   " in gr_author:
					gr_author = gr_author.replace("   "," ")
				elif "  " in gr_author:
					gr_author = gr_author.replace("  "," ")
				gr_title = result.find('a', {'class':'bookTitle'})
				gr_title_string = gr_title.get_text().strip()
				title_url = gr_title['href']
				if gr_title_string.lower() == title_data['title'].lower() and gr_author.lower() == author_data['first_name'].lower() + " " + author_data['last_name'].lower():
					good_match = True
					break
			if good_match == True:
				url = 'https://www.goodreads.com' + title_url
				response = requests.get(url)
				soup = BeautifulSoup(response.text, "lxml")
			else:
				links = search_soup.find_all('a', href=True)
				books = [a['href'] for a in links if '/book/show/' in a['href']]
				for book in books:
					if not 'summary' in book and not 'analysis' in book and not 'lesson-plan' in book and not 'sidekick' in book and not 'teaching-with' in book and not 'study-guide' in book and not 'quicklet' in book and not 'lit-crit' in book and not author_data['last_name'].lower() in book:
						url = 'https://www.goodreads.com' + book
						response = requests.get(url)
						soup = BeautifulSoup(response.text, "lxml")
						heading = soup.find('h1', {'id': 'bookTitle'}).string
						break
	else:
		url = title_data['good_reads_url']
		response = requests.get(url)
		soup = BeautifulSoup(response.text, "lxml")

	
	if not title_data['good_reads_url']:
		if '?' in url:
			url = url[:url.rfind("?")]
		title_data['good_reads_url'] = url

	if not title_data['cover_url']:
		try:
			title_data['cover_url'] = soup.find('img', {"id" : "coverImage"})['src'].replace("compressed.","")
		except:
			pass
	
	
	details = soup.find('div', {"id" : "details"})
	details_text = details.get_text()
	
	if not title_data['published_date']:
		possible_dates = details.find_all('div', attrs={'class':'row'})
		for item in possible_dates:
			published_date = item.find(text=re.compile("Published"))
			if published_date:
				published_date = published_date.strip()
				numbers = numbers_in_string(published_date)
				if numbers > 4:
					title_data['published_date'] = search_dates(published_date,languages=['en'])[0][1]
				elif numbers == 4:
					year = int(re.search("\d\d\d\d", published_date).group())
					title_data['published_date'] = date(year,1,1)
	
	if not title_data['first_published']:
		try:
			first_published = details.find('nobr').string.strip()
			numbers = numbers_in_string(first_published)
			if numbers > 4:
				title_data['first_published'] = search_dates(first_published,languages=['en'])[0][1]
			elif numbers == 4:
				year = int(re.search("\d\d\d\d", first_published).group())
				title_data['first_published'] = date(year,1,1)			
		except:
			pass
	
	if not title_data['pages']:
		try:
			pages = details.find('span', {'itemprop': 'numberOfPages'}).string
			title_data['pages'] = int(pages[:pages.find(" ")])
		except:
			pass

	if not title_data['publisher']:
		try:
			by_location = details_text.find("by ")
			title_data['publisher'] = details_text[by_location+3:details_text.find("\n", by_location)]
		except:
			pass
	
	if not title_data['isbn']:
		try:
			isbn = re.search("\d\d\d\d\d\d\d\d\d\d\d\d\d", details_text).group()
			title_data['isbn'] = isbn
		except:
			try:
				isbn = re.search("\d\d\d\d\d\d\d\d\d\d", details_text).group()
				title_data['isbn'] = isbn
			except:
				pass

	if not title_data['original_title'] and title_data['read_language'] != title_data['original_language']:
		try:
			parent = details.find('div', text="Original Title").parent
			original_title = parent.find('div', {'class':'infoBoxRowItem'}).string
			title_data['original_title'] = original_title
		except:
			pass

	return title_data, author_data
	
def get_title(title_data, author_data):
	if title_data['read_language'] == 'da':
		title_data, author_data = get_ereolen(title_data, author_data)
		title_data, author_data = get_bibliotek_dk(title_data, author_data)
		title_data, author_data = get_goodreads(title_data, author_data)
		#cover from ereolen, mofibo, saxo
		# danish library request
	else:
		title_data, author_data = get_goodreads(title_data, author_data)
	return title_data, author_data

The template

The simplicity:

<h1>Add book</h1>

{% if book_saved %}
	<p>Bogen blev gemt!</p>
{% endif %}	

<form method="post">
<p class="center"><input class="button blue" name="lookup" type="submit" value="Look up">
<input class="button green" name="save" type="submit" value="Save"></p>

<p class="center">
{% if author_form.biography.value %}
	<a href="{{ author_form.biography.value }}">biografi</a>
{% endif %}

{% if title_form.good_reads_url.value %}
	<a href="{{ title_form.good_reads_url.value }}">goodreads</a>
{% endif %}

{% if title_form.ereolen_url.value %}
	<a href="{{ title_form.ereolen_url.value }}">ereolen</a>
{% endif %}

{% if title_form.biblo_dk_url.value %}
	<a href="{{ title_form.biblo_dk_url.value }}">bibliotek.dk</a>
{% endif %}
</p>

{% csrf_token %}
<div class="grid addbook">
	<div>
		{{ author_form }}
	</div>
	
	<div>
		{{ title_form }}
	</div>
	<div>
		{{ read_form }}
		{% if title_form.cover_url.value %}
		<img class="cover" src="{{ title_form.cover_url.value }}">
		{% endif %}
	</div>
</div>
</form>

The data model

Here’s models.py with the embarrassing list of countries and languages (that I should have gotten from somewhere else) edited out:

from django.db import models
from isbn_field import ISBNField

class Author(models.Model):
	GENDER_CHOICES = [
		('f', 'Female'),
		('m', 'Male'),
		('o', 'Other'),
	]

	DATA_QUALITY_CHOICES = [
		('good', 'Good'),
		('bad', 'Bad'),
		('med', 'Medium'),
	]

	first_name = models.CharField('First name', max_length=500, blank=True)
	last_name = models.CharField('Last name', max_length=500)
	def __str__(self):
		return self.first_name + " " + self.last_name
	def get_titles(self):
		return " & ".join([t.title for t in self.title_set.all()])
	gender = models.CharField('Gender', choices=GENDER_CHOICES, max_length=1, blank=True)
	birth_date = models.DateField(null=True, blank=True)
	country = models.CharField('Country', choices=COUNTRY_CHOICES, max_length=2, blank=True)
	biography = models.URLField('Biography url', max_length=500, blank=True) 
	data_quality = models.CharField('Datakvalitet', choices=DATA_QUALITY_CHOICES, max_length=4, blank=True)
	
	class Meta:
		ordering = ['last_name']
	
class Title(models.Model):
	GENRE_CHOICES = [
		('nf', 'Non-Fiction'),
		('fi', 'Fiction'),
	]

	authors = models.ManyToManyField(Author)
	def get_authors(self):
		return " & ".join([t.first_name + " " + t.last_name for t in self.authors.all()])
	get_authors.short_description = "Author(s)"	
	title = models.CharField('Title', max_length=500)
	def __str__(self):
		return self.title
	read_language = models.CharField('Read in language', choices=LANGUAGE_CHOICES, max_length=2)
	original_language = models.CharField('Original language', choices=LANGUAGE_CHOICES, max_length=2, blank=True)
	original_title = models.CharField('Original title', max_length=500, blank=True)
	genre = models.CharField('Overall genre', choices=GENRE_CHOICES, max_length=2)
	publisher = models.CharField('Publisher', max_length=100, blank=True)
	first_published = models.DateField(null=True, blank=True)
	published_date = models.DateField(null=True, blank=True)
	isbn = ISBNField(null=True, blank=True)
	cover_filename = models.CharField('Cover filename', max_length=100, blank=True)
	cover_url = models.URLField('Cover-url', max_length=500, blank=True)
	pages = models.PositiveIntegerField(blank=True, null=True)
	good_reads_url = models.URLField('Goodreads-url', max_length=500, blank=True)
	ereolen_url = models.URLField('Ereolen-url', max_length=500, blank=True)
	biblo_dk_url = models.URLField('Biblo-url', max_length=500, blank=True)

	class Meta:
		ordering = ['title']

class Read(models.Model):
	title = models.ForeignKey(Title, on_delete=models.CASCADE)
	date = models.DateField()
	sort_order = models.PositiveIntegerField(blank=True, null=True)

The front page

The views.py function for the front page is short and sweet:

def index(request):
	context = {}
	context['request'] = request
	reads = Read.objects.order_by('-date__year', 'date__month','sort_order','id').select_related('title')
	context['reads'] = reads
	context['months'] = [[i, calendar.month_abbr[i]] for i in range(1,13)]
	return render(request, 'books/index.html', context)

And, while longer, I think the template loop is nice too (although there is that clumsy nested loop):

{% regroup reads by date.year as years_list %}

{% for year, readings in years_list %}
	<h2>{{ year }}</h2>
	{% if year == 2015 %}
		<p>I was on paternity leave most of this year which gave me time to read a lot, but not the mental surplus to register by month. This year I bought a Kindle which re-kindled (durr) my interest in reading.</p>
	{% elif year == 2004 %}
		<p>I was working in England from around September 2003 to February 2004. This gave me time to read a lot, but not the computer access at home necessary to register my reads precisely.</p>
	{% elif year == 2003 %}
		<p>The year I began registering my reads.</p>
	{% elif year == 2002 %}	
		<p>This - and all years before - is from memory in 2003, so not really precise.</p>
	{% endif %}
	
	{% regroup readings by date.month as months_list %}
	
	{% if year > 2004 and not year == 2015 %}
		<div class="grid reads">
			{% for month in months %}
				<div class="flex">
					<div>{{ month.1 }}</div>
					{% for mon, reads in months_list %}
						{% if mon == month.0 %}
							{% for read in reads %}
								<a title="{{ read.title }}" href="{% url 'books_book' read.title.id %}"><img class="frontcover" loading="lazy" src="{% static 'books/covers/150/' %}{{ read.title.id }}.webp"></a>
							{% endfor %}
						{% endif %}
					{% endfor %}
				</div>
			{% endfor %}
		</div>
	{% else %}
		{% for read in readings %}
			<a href="{% url 'books_book' read.title.id %}"><img class="frontcover" loading="lazy" src="{% static 'books/covers/150/' %}{{ read.title.id }}.webp"></a>
		{% endfor %}
	{% endif %}

The statistics page

The charts on the statistics page are made with Chart.js which is so easy that you don’t even need to know Javascript.

Here’s the views.py function which could probably be sped up if I had any idea how (which I don’t):

def statistics(request):
	context = {}
	
	# All reads, used for lots of charts
	reads = Read.objects.order_by('date__year').select_related('title').prefetch_related('title__authors')
	context['reads'] = reads
	
	# Books per year chart queryset
	books_pages_per_year = Read.objects.values('date__year').annotate(Count('id'), Sum('title__pages'), Avg('title__pages')).order_by('date__year')
	context['books_pages_per_year'] = books_pages_per_year
	
	# Prepare year, value-dictionaries
	genre_structure = {}	# fiction vs. non-fiction
	author_gender_structure = {}	# male vs. female
	author_birth_structure = {}	# median age of authors
	read_language_structure = {} # language of read
	original_language_structure = {} # original language of read
	language_choices = dict(Title.LANGUAGE_CHOICES)	# look up dict for original languages
	author_country_structure = {} # country of author
	country_choices = dict(Author.COUNTRY_CHOICES)
	book_age_structure = {} # median age of books

	for read in reads:
		year_of_read = read.date.year
		# Put year keys in dictionaries
		if not year_of_read in genre_structure:	# check one = check all
			genre_structure[year_of_read] = []
			author_gender_structure[year_of_read] = []
			author_birth_structure[year_of_read] = []
			read_language_structure[year_of_read] = []
			original_language_structure[year_of_read] = []
			author_country_structure[year_of_read] = []
			book_age_structure[year_of_read] = []
		
		# Put values in dictionaries
		if read.title.read_language == 'da' or read.title.read_language == 'en':
			read_language_structure[year_of_read].append(read.title.read_language)
		
		if read.title.original_language:
			original_language_structure[year_of_read].append(language_choices[read.title.original_language])
		
		if read.title.genre:
			genre_structure[year_of_read].append(read.title.genre)
		
		if read.title.first_published:
			book_age_structure[year_of_read].append(read.title.first_published.year)
		
		for author in read.title.authors.all():
			if author.gender:
				author_gender_structure[year_of_read].append(author.gender)
			if author.birth_date:
				author_birth_structure[year_of_read].append(author.birth_date.year)
			if author.country:
				author_country_structure[year_of_read].append(country_choices[author.country])
		
	# Prepare datasets for charts
	genres = {}
	for year, genre_list in genre_structure.items():
		number_of_titles = len(genre_list)
		number_of_fiction_titles = sum(1 for genre in genre_list if genre == 'fi')
		fiction_percentage = int(number_of_fiction_titles/number_of_titles*100)
		non_fiction_percentage = 100 - fiction_percentage
		genres[year] = [fiction_percentage, non_fiction_percentage]
	context['genres'] = genres
	
	median_author_age = {}
	for year, birthyears in author_birth_structure.items():
		birthyears = sorted(birthyears)
		median_birthyear = birthyears[len(birthyears) // 2]
		median_author_age[year] = year - median_birthyear
	context['median_author_age'] = median_author_age
				
	author_genders = {}
	for year, genders in author_gender_structure.items():
		number_of_authors = len(genders)
		males = sum(1 for gender in genders if gender == 'm')
		male_percentage = int(males/number_of_authors*100)
		female_percentage = 100 - male_percentage
		author_genders[year] = [male_percentage, female_percentage]
	context['author_genders'] = author_genders
	
	read_languages = {}
	for year, languages in read_language_structure.items():
		number_of_languages = len(languages)
		danish = sum(1 for language in languages if language == 'da')
		danish_percentage = int(danish / number_of_languages * 100)
		english_percentage = 100 - danish_percentage
		read_languages[year] = [danish_percentage, english_percentage]
	context['read_languages'] = read_languages
	
	original_languages = []
	original_languages_years = []
	for year, languages in original_language_structure.items():
		if not year in original_languages_years:
			original_languages_years.append(year)
		for lang in languages:
			if lang not in original_languages:
				original_languages.append(lang)
	original_languages_template = {}
	for language in original_languages:
		original_languages_template[language] = []
		for year in original_languages_years:
			count_of_language_in_year = sum(1 for lang in original_language_structure[year] if language == lang)
			original_languages_template[language].append(count_of_language_in_year)
	context['original_languages_template'] = original_languages_template
	context['original_languages_years'] = original_languages_years

	author_countries = []
	author_countries_years = []
	for year, countries in author_country_structure.items():
		if not year in author_countries_years:
			author_countries_years.append(year)
		for country in countries:
			if country not in author_countries:
				author_countries.append(country)
	author_countries_template = {}
	for country in author_countries:
		author_countries_template[country] = []
		for year in author_countries_years:
			count_of_country_in_year = sum(1 for countr in author_country_structure[year] if country == countr)
			author_countries_template[country].append(count_of_country_in_year)
	context['author_countries_template'] = author_countries_template
	context['author_countries_years'] = author_countries_years			

	median_book_age = {}
	for year, publish_years in book_age_structure.items():
		publish_years = sorted(publish_years)
		# account for no data in years
		if len(publish_years) >= 2:
			median_publish_year = publish_years[len(publish_years) // 2]
		elif len(publish_years) == 1:
			median_publish_year = publish_years[0]
		else:
			median_publish_year = 0
		median_book_age[year] = year - median_publish_year
	context['median_book_age'] = median_book_age
	
	return render(request, 'books/statistics.html', context)
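
As for speeding it up, one candidate would be to push more of the per-year counting into the database with aggregation instead of looping over every read in Python. A rough, unbenchmarked sketch for the genre chart, reusing the field names from the models above:

# Sketch: per-year fiction/non-fiction percentages computed in the database
# instead of in the Python loop above
from django.db.models import Count, Q

genre_counts = (
	Read.objects
	.values('date__year')
	.annotate(
		titles=Count('id'),
		fiction=Count('id', filter=Q(title__genre='fi')),
	)
	.order_by('date__year')
)
genres = {}
for row in genre_counts:
	fiction_percentage = int(row['fiction'] / row['titles'] * 100)
	genres[row['date__year']] = [fiction_percentage, 100 - fiction_percentage]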

And a template example:

<div>
	<h2>Reads per year</h2>
	<canvas id="books_per_year"></canvas>
</div>

<script>
var ctx = document.getElementById('books_per_year').getContext('2d');
var myChart = new Chart(ctx, {
	type: 'bar',
	data: {
		labels: [{% for year in books_pages_per_year %}{% if not forloop.last %}{{ year.date__year }}, {% else %}{{ year.date__year }}{% endif %}{% endfor %}],
		datasets: [{
			label: 'Read',
			data: [{% for year in books_pages_per_year %}{% if not forloop.last %}{{ year.id__count }}, {% else %}{{ year.id__count }}{% endif %}{% endfor %}],
			backgroundColor: 'rgba(255, 99, 132, 0.2)',
			borderColor: 'rgba(255, 99, 132, 1)',
			borderWidth: 1
		}]
	},
	options: {
		tooltips: {
			callbacks: {
				label: function(tooltipItem, data) {
					return data.datasets[tooltipItem.datasetIndex].label + ': ' + tooltipItem.value + ' books';
				}
			}
		},
		legend: {
			display: false
		},
		responsive: true,
		scales: {
			yAxes: [{
				ticks: {
					beginAtZero: true
				}
			}]
		}
	}
});
</script>

THANK YOU FOR SKIMMING

Wallnot’s Twitter bot, version 3

Wallnot’s Twitter bot finds shared articles from Politiken and Zetland on Twitter and shares them with the world. It works like this:

# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com

import requests
from bs4 import BeautifulSoup
from datetime import datetime
from datetime import date
from datetime import timedelta
import json
import time
import random
from TwitterAPI import TwitterAPI
from nested_lookup import nested_lookup

# CONFIGURATION #
# List to store articles to post to Twitter
articlestopost = []

# Search tweets from last 3 hours
now = datetime.utcnow()
since_hours = 3
since = now - timedelta(hours=since_hours)
since_string = since.strftime("%Y-%m-%dT%H:%M:%SZ")

# Search configuration
# https://developer.twitter.com/en/docs/twitter-api/tweets/search/api-reference/get-tweets-search-recent
# https://github.com/twitterdev/Twitter-API-v2-sample-code/tree/master/Recent-Search
tweet_fields = "tweet.fields=entities"
media_fields = "media.fields=url"
max_results = "max_results=100"
start_time = "start_time=" + since_string

# Twitter API login
client_key = ''
client_secret = ''
access_token = ''
access_secret = ''
api = TwitterAPI(client_key, client_secret, access_token, access_secret)

bearer_token = ''

# POLITIKEN #
# Run search
query = 'politiken.dk/del'

url = "https://api.twitter.com/2/tweets/search/recent?query={}&{}&{}&{}&{}".format(
	query, tweet_fields, media_fields, max_results, start_time
)
headers = {"Authorization": "Bearer {}".format(bearer_token)}
response = requests.request("GET", url, headers=headers)
json_response = response.json()

urllist = list(set(nested_lookup('expanded_url', json_response)))

# Only process urls that were not in our last Twitter query
proceslist = []
with open("./pol_lastbatch.json", "r", encoding="utf8") as fin:
	lastbatch = list(json.load(fin))
	for url in urllist:
		if url not in lastbatch and query in url:
			proceslist.append(url)
# Save current query to use for next time
with open("./pol_lastbatch.json", "wt", encoding="utf8") as fout:
	lastbatch = json.dumps(urllist)
	fout.write(lastbatch)

# Request articles and get titles and dates and sort by dates
articlelist = []

pol_therewasanerror = False
for url in proceslist:
	try:
		if 'https://www.google.com' in url:
			start = url.find('url=')+4
			end = url.find('&', start)
			url = url[start:end]	
		if not len(url) == 37:
			url = url[:37]
		data = requests.get(url)
		result = data.text
		if '"isAccessibleForFree": "True"' not in result:
			realurl = data.history[0].headers['Location']
			if not "/article" in realurl and not ".ece" in realurl:
				start_of_unique_id = realurl.index("/art")+1
				end_of_unique_id = realurl[start_of_unique_id:].index("/")
				unique_id = realurl[start_of_unique_id:start_of_unique_id+end_of_unique_id]
			elif "/article"	in realurl and ".ece" in realurl:
				start_of_unique_id = realurl.index("/article")+1
				end_of_unique_id = realurl[start_of_unique_id:].index(".ece")
				unique_id = realurl[start_of_unique_id:start_of_unique_id+end_of_unique_id]
			articlelist.append({"id": unique_id, "url": url})
	except Exception as e:
		print(url)
		print(e)
		pol_therewasanerror = True

#If something fails, we'll process everything again next time			
if pol_therewasanerror == True:
	with open("./pol_lastbatch.json", "wt", encoding="utf8") as fout:
		urllist = []
		lastbatch = json.dumps(urllist)
		fout.write(lastbatch)
	
# Check if article is already posted and update list of posted articles
with open("./pol_published_v2.json", "r", encoding="utf8") as fin:
	alreadypublished = list(json.load(fin))
	# File below used for paywall.py to update wallnot.dk
	for article in articlelist:
		hasbeenpublished = False
		for published_article in alreadypublished:
			if article['id'] == published_article['id']:
				hasbeenpublished = True
				break
		if hasbeenpublished == False:
			alreadypublished.append(article)
			articlestopost.append(article)
	# Save updated already published links
	with open("./pol_published_v2.json", "wt", encoding="utf8") as fout:
		alreadypublishedjson = json.dumps(alreadypublished)
		fout.write(alreadypublishedjson)

# ZETLAND #
# Run search
query = 'zetland.dk/historie'

url = "https://api.twitter.com/2/tweets/search/recent?query={}&{}&{}&{}&{}".format(
	query, tweet_fields, media_fields, max_results, start_time
)
headers = {"Authorization": "Bearer {}".format(bearer_token)}
response = requests.request("GET", url, headers=headers)
json_response = response.json()

urllist = list(set(nested_lookup('expanded_url', json_response)))

# Only process urls that were not in our last Twitter query
proceslist = []
with open("./zet_lastbatch.json", "r", encoding="utf8") as fin:
	lastbatch = list(json.load(fin))
	for url in urllist:
		if url not in lastbatch and query in url:
			proceslist.append(url)
# Save current query to use for next time
with open("./zet_lastbatch.json", "wt", encoding="utf8") as fout:
	lastbatch = json.dumps(urllist)
	fout.write(lastbatch)

# Request articles and get titles and dates and sort by dates
articlelist = []
titlecheck = []

zet_therewasanerror = False
for url in proceslist:
	try:
		if 'https://www.google.com' in url:
			start = url.find('url=')+4
			end = url.find('&', start)
			url = url[start:end]		
		data = requests.get(url)
		result = data.text
		soup = BeautifulSoup(result, "lxml")
		title = soup.find('meta', attrs={'property':'og:title'})
		title = title['content']
		timestamp = soup.find('meta', attrs={'property':'article:published_time'})
		timestamp = timestamp['content']
		timestamp = timestamp[:timestamp.find("+")]
		dateofarticle = datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S.%f')
		if title not in titlecheck:
			articlelist.append({"title": title, "url": url, "date": dateofarticle})
			titlecheck.append(title)
	except Exception as e:
		print(url)
		print(e)
		zet_therewasanerror = True

#If something fails, we'll process everything again next time
if zet_therewasanerror == True:
	with open("./zet_lastbatch.json", "wt", encoding="utf8") as fout:
		urllist = []
		lastbatch = json.dumps(urllist)
		fout.write(lastbatch)


			
articlelist_sorted = sorted(articlelist, key=lambda k: k['date']) 

# Check if article is already posted and update list of posted articles
with open("./zet_published.json", "r", encoding="utf8") as fin:
	alreadypublished = list(json.load(fin))
	for art in articlelist_sorted:
		title = art['title']
		if title not in alreadypublished:
			alreadypublished.append(title)
			articlestopost.append(art)
	# Save updated already published links
	with open("./zet_published.json", "wt", encoding="utf8") as fout:
		alreadypublishedjson = json.dumps(alreadypublished, ensure_ascii=False)
		fout.write(alreadypublishedjson)


# POST TO TWITTER AND FACEBOOK#
friendlyterms = ["flink","rar","gavmild","velinformeret","intelligent","sød","afholdt","bedårende","betagende","folkekær","godhjertet","henrivende","smagfuld","tækkelig","hjertensgod","graciøs","galant","tiltalende","prægtig","kær","godartet","human","indtagende","fortryllende","nydelig","venlig","udsøgt","klog","kompetent","dygtig","ejegod","afholdt","omsorgsfuld","elskværdig","prægtig","skattet","feteret"]
enjoyterms = ["God fornøjelse!", "Nyd den!", "Enjoy!", "God læsning!", "Interessant!", "Spændende!", "Vidunderligt!", "Fantastisk!", "Velsignet!", "Glæd dig!", "Læs den!", "Godt arbejde!", "Wauv!"]

if articlestopost:
	for art in articlestopost:
		if "zetland" in art['url']:
			medium = "@ZetlandMagasin"
		else:
			medium = "@politiken"
		friendlyterm = random.choice(friendlyterms)
		enjoyterm = random.choice(enjoyterms)
		status = "En " + friendlyterm + " abonnent på " + medium + " har delt en artikel. " + enjoyterm
		twitterstatus = status + " " + art['url']
		try:
			twitterupdate = api.request('statuses/update', {'status': twitterstatus})
		except Exception as e:
			print(e)
		time.sleep(15)

Basic image processing in Python

After downloading a lot of beautiful photographs from the internet, I needed to do some rough sorting. I wanted to remove the photos whose resolution was too low for me to bother looking at them, let alone print them at some later point.

With the Python library Pillow I could easily chew through my photos.

First I use the “walk” functionality from the os library, which lets me loop through all folders, subfolders and files from a starting point on my hard drive.

Then I use Pillow to get the dimensions of each photo and calculate its area. If a photo is 3 megapixels (3 million pixels) or larger, I keep it. If it is smaller, I delete it.

Here is how I did it:

# megapixels.py
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com
'''A program to go through a directory and subdirectories and delete
image files below a certain megapixel size.'''

import os						# Used to create directories at local destination
from PIL import Image
import PIL

save_location = "C:/Downloads/"
contents = os.walk(save_location)

for root, directories, files in contents:
	for file in files:
		location = os.path.join(root,file)
		if not ".py" in file:
			try:
				image = Image.open(location)
				area = image.size[0]*image.size[1]
				if area >= 3000000:
					print("stort", location)
					image.close()
				else:
					print("for lille", location)
					image.close()
					os.remove(location)
			except PIL.UnidentifiedImageError:
				if ".jpg" in file or ".png" in file or ".jpeg" in file or ".tif" in file:
					print("deleting:", location)
					os.remove(location)
			except PIL.Image.DecompressionBombError:
				pass

A crawler for directory listings on the web

If you have spent time on the internet, you have probably come across one of these at some point:

Many web administrators choose to hide these listings of the files on a web server, which the Apache web server software can generate automatically.

But I discovered by chance that I could see what the photo agency Magnum had uploaded to their WordPress installation.

I decided to try to make a local copy, so I could look at beautiful photographs without having to wait for downloads from the internet.

First I tried Wget, a small program designed to mirror websites locally. But Wget had trouble fetching and chewing through the long lists of files; one of them was 36 megabytes, which is a lot of links.

So I wrote a small Python program that can chew through this kind of directory and file listing and download the files locally.

Here it is:

# apache-directory-downloader.py
# Author: Morten Helmstedt. E-mail: helmstedt@gmail.com
'''A program to fetch files from standard apache directory listings on the internet.
See https://duckduckgo.com/?t=ffab&q=apache%2Bdirectory%2Blisting&ia=images&iax=images
for examples of what this is.'''

import requests					# Send http requests and receive responses
from bs4 import BeautifulSoup	# Parse HTML data structures, e.g. to search for links
import os						# Used to create directories at local destination
import shutil					# Used to copy binary files from http response to local destination
import re						# Regex parser and search functions

# Terms to exclude, files with these strings in them are not downloaded
exclude = [
	"-medium",
	"-overlay",
	"-teaser-",
	"-overlay",
	"-thumbnail",
	"-collaboration",
	"-scaled",
	"-photographer-featured",
	"-photographer-listing",
	"-full-on-mobile",
	"-theme-small-teaser",
	"-post",
	"-large",
	"-breaker",
	]

# Takes an url and collects all links
def request(url, save_location):
	# Print status to let user know that something is going on
	print("Requesting:", url)
	# Fetch url
	response = requests.get(url)
	# Parse response
	soup = BeautifulSoup(response.text, "lxml")
	# Search for all links and exclude certain strings and patterns from links
	urllist = [
		a['href'] for a in soup.find_all('a', href=True)
		if '?C=' not in a['href']
		and not a['href'].startswith("/")
		and not any(term in a['href'] for term in exclude)
		and not re.search(r"\d\d[x]\d\d", a['href'])
	]
	# If status code is not 200 (OK), add url to list of errors
	if not response.status_code == 200:
		errorlist.append(url)
	# Send current url, list of links and current local save collection to scrape function
	return scrape(url, urllist, save_location)

def scrape(path, content, save_location):
	# Loop through all links
	for url in content:
		# Print status to let user know that something is going on
		print("Parsing/downloading:", path+url)
		# If there's a slash ("/") in the link, it is a directory
		if "/" in url:
			# Create local directory if it doesn't exist
			try:
				os.mkdir(save_location+url)
			except FileExistsError:
				pass
			# Run request function to fetch contents of directory
			request(path+url, save_location+url)
		# If the link doesn't contain a slash, it's a file and is saved
		else:
			# Check if file already exists, e.g. has been downloaded in a prior run
			if not os.path.isfile(save_location+url):
				# If file doesn't exist, fetch it from remote location
				file = requests.get(path+url, stream=True)
				# Print status to let user know that something is going on
				print("Saving file:", save_location+url)
				# Save file to local destination
				with open(save_location+url, 'wb') as f:
					# Decodes file if received compressed from server
					file.raw.decode_content = True
					# Copies binary file to local destination
					shutil.copyfileobj(file.raw, f)

# List to collect crawling errors
errorlist = []
# Local destination, e.g. 'C:\Downloads' for Windows
save_location = "C:/Downloads/"
# Remote location, e.g. https://example.com/files
url = "https://content.magnumphotos.com/wp-content/uploads/"
# Call function to start crawling
request(url, save_location)
# Print any crawling errors
print(errorlist) 
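
To get a feel for what the exclude list and the width-by-height regex actually filter out, here is a small, hypothetical test of the same predicate on a few made-up file names (the names are invented purely for illustration):

import re

exclude = ["-thumbnail", "-scaled", "-large"]	# a subset of the list above
samples = [
	"PAR112233.jpg",
	"PAR112233-thumbnail.jpg",
	"PAR112233-260x170.jpg",
	"2021/",
]

for name in samples:
	skipped = any(term in name for term in exclude) or re.search(r"\d\d[x]\d\d", name)
	print(name, "->", "skipped" if skipped else "downloaded or followed")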

My round in the ring with Copyright Agent

Perhaps you have heard of the unsympathetic invoicing factory Copyright Agent, which, with no understanding of how web pages and links work, sends unjustified scare-letter invoices to private individuals and small associations?

At least the politician Pelle Dragsted, the politician Mette Abildgaard and the entrepreneur Martin Thorborg have. Perhaps not exactly the kind of people you would imagine are out to take the bread out of the mouths of hard-working creatives?

Neither am I, yet I too have formed my own impression of Copyright Agent as an unsympathetic, impersonal and uncomprehending invoicing factory.

Here is my history with the company:

Chapter 1: I have a blog

On kukua.dk, businesses in the culture sector can get in touch with students at the Department of Arts and Cultural Studies at the University of Copenhagen. Think: bulletin board.

It is free for the poor (but creative) businesses, which for instance get skilled interns without paying a penny for it.

The businesses that are up to it register themselves and post their own notices.

Chapter 2: I get an e-mail

On 15 April 2019 I receive this from a student assistant at the company Copyright Agent:

Dear Kukua.dk
We have become aware that you have most likely infringed copyright, as we cannot find any record of the usage in our systems.

As a picture agency, Ritzau Scanpix owns the resale rights to the image in question, which is marked with a red square in the attached document, which also contains documentation of the infringement we believe has taken place.

On that basis, Ritzau Scanpix is obliged, on behalf of its creators, to seek remuneration and compensation for images that have been published without authorisation. Even if it may not have been your intention, publishing the image without a valid licence or permission is an infringement of the photographer's copyright.

The attached material contains general information, documentation, an invoice and a statement of the compensation due to the rights holder, as well as "Frequently asked questions and answers".

As Ritzau Scanpix is seeing a growing number of copyright infringements of its material, it finds it necessary to track down and police its material, so that it can continue to deliver quality material to its customers.

Copyright Agent works with a number of professional photographers and leading picture agencies on protecting their copyright on the internet.
You can read more about Copyright Agent here: www.copyrightagent.dk

If you have questions or documentation concerning the case, you are very welcome to reply to this e-mail or call us on 70 273 272, Monday to Friday from 9:00 to 17:00.


Please state your case number if you contact us by phone, so that we can assist you with the specific case.

Attached to the e-mail is a PDF file telling me that I have violated copyright law, along with an invoice for DKK 3,437.50 that I must pay "within 10 days of today's date".

Here you can see the PDF file; I have only censored out the image that Copyright Agent inserted to document my alleged copyright infringement:

Chapter 3: I reply

From the fine PDF from Copyright Agent I can see that it was not me who uploaded the image at all. It was a user from the sympathetic art house cinema Posthus Teatret.

So my immediate thought is: this really has nothing to do with me. Just as politiken.dk is not liable if I comment on an article with the full text of Syv år for PET (as long as they remove it again once they become aware of the copyright infringement), I am not liable when I assume in good faith that my users are, of course, allowed to publish the photos they publish; after all, it is their own creative industry that lives off copyright.

So I promptly reply:

Dear Fatima

It was a user on the site who uploaded the image in question. Anyone can register on the site and post notices themselves.

As far as I can tell, the person in question works at, or is affiliated with, Posthus Teatret. Her phone number is in the notice, so I suggest you call her and ask. I will gladly remove the notice and/or the photo from the site if I receive a sworn statement from you that you own the copyright to the photo.

Best regards, Morten

Fatima replies:

Dear Morten
I have attached documentation that Ritzau Scanpix holds the copyright to the image material.
We will contact Posthus Teatret. Thank you for your help.

I delete the image from kukua.dk and write:

Dear Fatima

I have deleted the image from the server.

And Fatima replies:

Dear Morten

We have noted that the image has been removed, and we thank you for that.

And I think all is well. But it is not…

Chapter 4: The reminder

On 14 May 2019 I receive a new e-mail from Fatima:

Dear Kukua.dk

As the payment deadline has been exceeded, we are sending a small reminder. If we do not hear back from you within the week, we will send reminders in the cases with the original amounts.

This must be a mistake; it is remarkably sloppy to make that kind of error when Copyright Agent's whole racket is pressuring citizens with legal language.

I write back the same day:

Dear Fatima

We have settled the matter, so I certainly expect you to drop the claim.

And I think all is well. But it is not…

Chapter 5: Notice of debt collection

On 12 June I receive this e-mail from, you guessed it, Fatima:

R2, copyright infringement: notice of debt collection

Dear Kukua.dk

We have previously sent you a claim for compensation for the infringement of our client's copyright. We have still not registered your payment and hereby send the attached reminder in the case.

Copyright Agent works with a number of professional photographers and leading picture agencies on protecting their copyright on the internet.

You can read more about Copyright Agent here: www.copyrightagent.dk

If you have questions or documentation concerning the case, you are very welcome to reply to this e-mail or contact us by phone.

I do not receive debt collection notices very often, so at this point I find Copyright Agent's behaviour deeply unpleasant. I promptly fire off no fewer than four e-mails in response:

Dear Fatima

Please call me at your convenience on 25 80 16 54. We have already closed this case, but you keep contacting me.

Incidentally, I have also replied to all of your previous e-mails.

And to put it plainly: I do not intend to pay for a possible copyright infringement that I did not commit.

Dear Fatima

Attached is documentation of who, if a copyright infringement has indeed taken place, committed it by uploading the image in question to the server hosting kukua.dk. You can direct any claims to that person.

Please confirm that you are dropping the claim.

And finally the penny drops at Copyright Agent, which clearly relies on utterly incompetent automation and poor, underpaid student assistants in its unrestrained pursuit of profit:

Dear Morten

I apologise for the reminder, which was sent by mistake.
We will take the case further with Posthus Teatret.

I hope it ended well for Posthus Teatret. I still do not know whether they were allowed to use the image of their own cinema, but the fact that Copyright Agent goes after a poor cultural institution in order (by its own account) to help poor creatives shows clearly that Copyright Agent only does its error-ridden, incompetent work for the money.

Chapter 6: Why share the story?

So why publish my round in the ring with Copyright Agent?

So that others, as in the similar story about the law firm Njord, which wrongfully sent invoices for downloaded films to just about anyone, can read about Copyright Agent's business model and methods, as a warning and perhaps a help, should they be unlucky enough to receive an e-mail from the company.

1440 companies track you if you say yes to cookies on politiken.dk

See the 1440 companies

Today I visited politiken.dk and was met by:

I decided to investigate further.

Politiken's cookie policy hides in https://cdn.privacy-mgmt.com/consent/tcfv2/privacy-manager/privacy-manager-view?siteId=4366&vendorListId=5eeb8f57b8e05c69980ea9be&consentLanguage=DA.
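
The program further down reads the file from a local copy. To create that copy first, a fetch along these lines should do the trick (a sketch, assuming the endpoint still serves the raw JSON to a plain GET request):

import requests

url = ("https://cdn.privacy-mgmt.com/consent/tcfv2/privacy-manager/privacy-manager-view"
	"?siteId=4366&vendorListId=5eeb8f57b8e05c69980ea9be&consentLanguage=DA")
response = requests.get(url)
with open("privacy-manager-view.json", "w", encoding="utf8") as f:
	f.write(response.text)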

It is a 3 MB JSON file! I wrote a small Python program to chew through it:

import json
with open("privacy-manager-view.json", "r", encoding="utf8") as politiken:
	politiken = json.load(politiken)
	partners = []
	for vendor in politiken['vendors']:
		name = vendor['name']
		url = vendor['policyUrl']
		purposes = []
		if 'consentCategories' in vendor:
			for consent in vendor['consentCategories']:
				if consent['type'] == "IAB_PURPOSE":
					purposes.append(consent['name'])
		if 'iabSpecialPurposes' in vendor:
			for purpose in vendor['iabSpecialPurposes']:
				purposes.append(purpose)
		if 'iabFeatures' in vendor:
			for purpose in vendor['iabFeatures']:
				purposes.append(purpose)			
		if 'iabSpecialFeatures' in vendor:
			for purpose in vendor['iabSpecialFeatures']:
				purposes.append(purpose)	
		partners.append([name, url, purposes])
	partners.sort(key=lambda x:x[0].lower())
	number_of_partners = len(partners)
	linklist = "<html lang='da'><body><h1>"
	linklist += "Her er de " + str(number_of_partners) + " virksomheder, som overvåger dig, hvis du siger ja tak til alle cookies på politiken.dk (d. 11. december 2020)</h1><table>"
	for partner in partners:
		# Some vendors have no policy URL, so fall back to plain text for those
		if partner[1]:
			linklist += "<tr><td><a href='" + partner[1] + "'>" + partner[0] + "</a></td></tr>\n"
		else:
			linklist += "<tr><td>" + partner[0] + "</td></tr>\n"
	linklist += "</table></body></html>"
	with open("linklist.html", "wt", encoding="utf8") as fout:
		fout.write(linklist)
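
One thing to note: the purposes collected above never make it into the table, which only shows each vendor's name and policy link. If you also want the purposes in the list, a variant of the row-building loop could look like this (just a sketch of the idea):

# Variant of the row loop that also lists each vendor's purposes
for name, url, purposes in partners:
	purpose_text = ", ".join(purposes)
	if url:
		linklist += "<tr><td><a href='" + url + "'>" + name + "</a></td><td>" + purpose_text + "</td></tr>\n"
	else:
		linklist += "<tr><td>" + name + "</td><td>" + purpose_text + "</td></tr>\n"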

And here is the list:

https://helmstedt.dk/politiken.html

Now you can play the card game Krig online!

At https://wallnot.dk/krig/ I have just published this year's number one Christmas game: Krig!

Well, I had already tried simulating Krig once, but I thought it would be fun to connect the game logic with some (admittedly very limited) interactivity and the logic needed to actually display the game.

Now I have given it a try, and I have commented the code heavily, so it is hopefully easy to follow.

Here is views.py from Django:

from django.shortcuts import render
import random			# Used to shuffle decks
import base64			# Used for obfuscation and deobfuscation functions
from math import ceil 	# Used to round up

# Create decks function - not a view
def new_deck(context):
	# Create card values and list of cards in each colour
	card_values = range(2,15)
	spades = [str(i) + "S" for i in card_values]
	clubs = [str(i) + "C" for i in card_values]
	diamonds = [str(i) + "D" for i in card_values]
	hearts = [str(i) + "H" for i in card_values]
	# Combine colours to deck
	deck = spades + clubs + diamonds + hearts
	# Shuffle deck		
	random.shuffle(deck)
	# Divide deck between two players and convert to comma-separated string
	player_a_deck = ",".join(deck[0:26])
	player_b_deck = ",".join(deck[26:52])
	# Obfuscate decks to make cheating marginally harder using the obfuscate function
	# production variable toggles this behavior because it's very time consuming to debug
	# if obfuscation is on
	production = True
	if production == True:
		player_a_deck = obfuscate(player_a_deck)
		player_b_deck = obfuscate(player_b_deck)
	# Add the two decks to context
	context['player_a_deck_form'] = player_a_deck
	context['player_b_deck_form'] = player_b_deck
	# Set index to 0 to only turn one card for first round of game
	context['index'] = 0
	return context

# Obfuscate by converting to base64 encoding - not a view
def obfuscate(deck):
	return base64.b64encode(deck.encode()).decode()

# Deobfuscate by converting from base64 encoding to string - not a view
def deobfuscate(deck):
	return base64.b64decode(deck.encode()).decode()

# Logic to create a list of which cards should be hidden or shown to player - not a view
def show_hide_cards(cards_on_table, index):
	counter = 0
	cards_on_table_show_hide = []
	for card in cards_on_table:
		# First card should always be shown
		if counter == 0:
			cards_on_table_show_hide.append([card, True])
		# If the card number is divisible by 4 it is the turn card in a war
		elif counter % 4 == 0:
			cards_on_table_show_hide.append([card, True])
		# If the card number equals the index value, one or both players do not
		# have enough cards for a full war, so the last card should be turned
		elif counter == index:
			cards_on_table_show_hide.append([card, True])
		else:
			cards_on_table_show_hide.append([card, False])
		counter += 1
	return cards_on_table_show_hide

# Page view
def index(request):
	# Empty context variable to add to
	context = {}
	# Production variable to toggle obfuscation
	production = True
	# First visit, game has not been started
	if not request.method == 'POST':
		# Create a deck using the new_deck function
		new_deck(context)
	# Game has started
	else:
		### GAME PREPARATION AND CARD DISPLAY LOGIC ###
		# Current game status is used in template to know whether game has been
		# started or not, or has ended
		game_status = "Going on"
		
		# Get submitted decks from user submitted POST request
		player_a_deck = request.POST.get('player_a_deck')
		player_b_deck = request.POST.get('player_b_deck')
		
		# Deobfuscate submitted decks using the deobfuscate function
		if production == True:
			player_a_deck = deobfuscate(player_a_deck)
			player_b_deck = deobfuscate(player_b_deck)
		
		# Convert decks to lists
		player_a_deck = player_a_deck.split(",")
		player_b_deck = player_b_deck.split(",")

		# Get submitted index value in order to know which cards to compare
		# The index is used in case of war to determine which cards to compare
		# and what cards to show to player
		index = int(request.POST.get('index'))
		context['current_index'] = index
		
		# In order to display cards in correct order in case of war for player_b
		# a number of slices are prepared and added to context as strings in a list.
		# number_of_slices is rounded up in case index is not divisible by 4 (endgame logic)
		number_of_slices = ceil(index/4)	
		slices = []
		# Only needed if number of slices is above 0
		if number_of_slices:
			start = 1
			end = 5
			for slice in range(number_of_slices):
				slices.append(str(start)+":"+str(end))
				start +=4
				end += 4
		context['slices'] = slices
		
		# In order to display cards to player using a loop, the deck is sliced
		# by the index value plus 1. If index is 0, 1 card should be shown.
		# If index is 4 because of war, 5 cards should be shown... and so on.
		a_cards_on_table = player_a_deck[:index+1]
		b_cards_on_table = player_b_deck[:index+1]
		
		# Cards on table is run through function to decide which cards to show face up/face down
		# to player and added to context.
		context['a_cards_on_table'] = show_hide_cards(a_cards_on_table, index)
		context['b_cards_on_table'] = show_hide_cards(b_cards_on_table, index)
		
		# Length of cards "on the table" is calculated in order to calculate remaining cards in player decks.
		# The value for player a is shown to the players and is also used for template card display logic.
		a_cards_on_table_length = len(a_cards_on_table)
		b_cards_on_table_length = len(b_cards_on_table)
				
		# Calculate number of cards in decks
		a_number_of_cards = len(player_a_deck)
		b_number_of_cards = len(player_b_deck)

		# Add remaining cards in deck to context to show to players
		a_remaining_in_deck = a_number_of_cards - a_cards_on_table_length
		b_remaining_in_deck = b_number_of_cards - b_cards_on_table_length
		context['a_remaining_in_deck'] = a_remaining_in_deck
		context['b_remaining_in_deck'] = b_remaining_in_deck
		
		### GAME LOGIC ###
		# Check if both players have decks large enough to compare
		if a_number_of_cards > index and b_number_of_cards > index:
			# Convert first card in decks to integer value in order to compare
			player_a_card = int(player_a_deck[index][:len(player_a_deck[index])-1])
			player_b_card = int(player_b_deck[index][:len(player_b_deck[index])-1])

			# Player a has the largest card
			if player_a_card > player_b_card:
				# Add cards in play to end of player a deck and delete them from beginning
				# of player a and player b decks
				player_a_deck.extend(player_a_deck[:index+1])
				player_a_deck.extend(player_b_deck[:index+1])	
				del player_a_deck[:index+1]
				del player_b_deck[:index+1]
				# If a play is decided, index is set to 0
				index = 0
				context['message'] = "Du vandt runden!"
			# Player b has the largest card
			elif player_a_card < player_b_card:
				# Cards are added to player b's deck in a different order than for player a
				# to reduce the risk of the game going on forever
				player_b_deck.extend(player_b_deck[:index+1])	
				player_b_deck.extend(player_a_deck[:index+1])
				del player_a_deck[:index+1]
				del player_b_deck[:index+1]
				# If a play is decided, index is set to 0
				index = 0
				context['message'] = "Du tabte runden!"
			# Cards must be equal and war is on
			else:
				# In case of war normally four cards are added to the index, but
				# in order to accommodate a case of end-game war, there are special cases
				# if either player doesn't quite have enough cards for a full 4-card-turn war
				if a_number_of_cards >= index + 4 <= b_number_of_cards:
					index += 4
				# Since the if statement two levels up already checks that number of cards is larger
				# than the index value, an else with no criteria is enough to decide how many cards
				# each player has left to turn and add the smallest number to the index
				else:
					# Calculate the difference between number of cards and index for each player.
					# The smallest of the two differences is added to index to decide how many cards to use for war.
					# One is subtracted for the card already on the table
					a_difference = a_number_of_cards - index
					b_difference = b_number_of_cards - index
					index += min(a_difference, b_difference) - 1
					# Edge case: If war on last remaining card for either player, 1 is added to index to end the game
					# by getting the index above the number of cards in the deck of the player(s) with no cards left
					if a_remaining_in_deck == 0 or b_remaining_in_deck == 0:
						index += 1
				# Messages are different for single, double, triple wars and anything above.
				# Since the index can be upped by less than four, less than or equal is used to
				# decide which kind of war is on.
				if index <= 4:
					context['message'] = "Krig!"
				elif index <= 8:
					context['message'] = "Dobbeltkrig!"
				elif index <= 12:
					context['message'] = "Trippelkrig!"
				else:
					context['message'] = "Multikrig!"
		
		### AFTER GAME LOGIC AND DECIDE GAME LOGIC ###
		# Calculate length of decks after game logic has run
		player_a_deck_length = len(player_a_deck)
		player_b_deck_length = len(player_b_deck)
		
		# Compare lengths of decks to decide if someone has won. The number of cards on table for
		# next turn of cards is always at least one more than the index (index 0, 1 card, index 4,
		# 5 cards). There are three possible outcomes:
		# 1) Equal game: Both players are unable to turn and have equal sized decks (very, very rare!)
		# 2) Player a is unable to play and has a smaller deck than b (if both players are unable to turn, largest deck wins)
		# 3) Same as 2) for player b
		if player_a_deck_length <= index and player_b_deck_length <= index and player_a_deck_length == player_b_deck_length:
			context['message'] = "Spillet blev uafgjort. Hvor tit sker det lige?"
			game_status = "Over"
		elif player_a_deck_length <= index and player_a_deck_length < player_b_deck_length:
			context['message'] = "Du tabte spillet!"
			game_status = "Over"			
		elif player_b_deck_length <= index and player_b_deck_length < player_a_deck_length:
			context['message'] = "Du vandt spillet!"	
			game_status = "Over"			

		# Add size of decks after play to context to decide whether to show decks to player
		context['after_deck_a'] = player_a_deck
		context['after_deck_b'] = player_b_deck
		
		# Add game status to context
		context['game_status'] = game_status
		
		# Convert decks back to strings
		player_a_deck = ",".join(player_a_deck)
		player_b_deck = ",".join(player_b_deck)
		
		# Obfuscate decks using obfuscate function
		if production == True:		
			player_a_deck = obfuscate(player_a_deck)
			player_b_deck = obfuscate(player_b_deck)
		
		# Context for form
		context['player_a_deck_form'] = player_a_deck
		context['player_b_deck_form'] = player_b_deck
		context['index'] = index
		
		# If game is over, create a new deck to add to form for new game
		if game_status == "Over":
			new_deck(context)
	return render(request, 'krig/index.html', context)
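
A side note on the obfuscation: base64 is of course trivially reversible, which is exactly what the deobfuscate function relies on, so it only deters casual peeking at the hidden form fields. A quick round trip with the same two helpers shows the idea:

import base64

def obfuscate(deck):
	return base64.b64encode(deck.encode()).decode()

def deobfuscate(deck):
	return base64.b64decode(deck.encode()).decode()

deck = "14S,2H,7C"
hidden = obfuscate(deck)		# 'MTRTLDJILDdD'
print(deobfuscate(hidden))		# prints '14S,2H,7C' again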

And here is the index.html template:

{% load static %}
{% spaceless %}
<!doctype html>
<html lang="da">
	<head>
		<title>Krig!</title>
		<meta name="description" content="Spil det populære, vanedannende kortspil krig mod computeren - online!">
		<meta charset="utf-8">
		<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
		<link rel="stylesheet" href="{% static "krig/style.css" %}">
		<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png">
		<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png">
		<link rel="icon" type="image/png" sizes="16x16" href="/favicon-16x16.png">
		<link rel="manifest" href="/site.webmanifest">
		<link rel="mask-icon" href="/safari-pinned-tab.svg" color="#5bbad5">
		<meta name="msapplication-TileColor" content="#ffc40d">
		<meta name="theme-color" content="#ffffff">
		{% comment %}Most of stylesheet is loaded externally, but logic to size images in case of war is kept in template{% endcomment %}
		{% if current_index > 0 %}
		<style>
			img {
				width: 22%;
				display: inline;
			}
		</style>
		{% endif %}
	</head>
	<body>
		<h1>Krig</h1>
		
		{% comment %}Status message of current round or game is displayed{% endcomment %}
		<p class="status">
			{{ message }}
		</p>
		
		{% comment %}Page is divided in two-column grid. Each column is aligned towards vertical center of page{% endcomment %}
		<div class="grid">
			{% comment %}Player a ("You") column{% endcomment %}
			<div class="item text-right">
				<p>Dig</p>

				{% comment %}If any cards are left to turn, show number, if no cards are left, write no cards left{% endcomment %}
				<p class="cardsleft">
					{% if a_remaining_in_deck > 0 %}
						{{ a_remaining_in_deck }} kort tilbage i bunken
					{% elif a_remaining_in_deck == 0 %}
						Ingen kort tilbage!
					{% endif %}
				</p>

				{% comment %}Back of card (deck) is shown if cards are left in deck or game has not begun{% endcomment %}
				{% if a_remaining_in_deck > 0 or not game_status %}
					<img src="{% static 'krig/back_r.svg' %}">
				{% endif %}

				{% comment %}Loop to show player's turned cards.{% endcomment %}
				{% for card in a_cards_on_table %}
					{% if card.1 == True %}
						<img src="{% static 'krig/'|add:card.0|add:'.svg' %}"><br>
					{% else %}
						<img src="{% static 'krig/back_r.svg' %}">
					{% endif %}
				{% endfor %}
			</div>

			{% comment %}Player b ("Computer") column{% endcomment %}
			<div class="item text-left">
				<p>Computeren</p>
				{% comment %}If any cards are left to turn, show number, if no cards are left, write no cards left{% endcomment %}
				<p class="cardsleft">
					{% if b_remaining_in_deck > 0 %}
						{{ b_remaining_in_deck }} kort tilbage i bunken
					{% elif b_remaining_in_deck == 0 %}
						Ingen kort tilbage!
					{% endif %}
				</p>

				{% comment %}
					The order of the deck and the first turned card is different for player b who plays on the right side.
					Therefore if there is a first card in player b's cards on table that card is shown.
				{% endcomment %}
				{% if b_cards_on_table.0 %}
					<img src="{% static 'krig/'|add:b_cards_on_table.0.0|add:'.svg' %}">
				{% endif %}

				{% comment %}If b has cards left in deck or game has not started, show back of deck{% endcomment %}
				{% if b_remaining_in_deck > 0 or not game_status %}
						<img src="{% static 'krig/back_r.svg' %}">
				{% endif %}
				<br>
				
				{% comment %}
					Due to the order of player b's shown cards being different than for player a, this loop to show cards
					in case of war is a little different from player a's.
					The slices variable contains pairs of values saved as strings that the Django template filter |slice can
					understand, e.g. "1:5". These are looped through so that only parts of b_cards_on_table corresponding to
					the slice is looped through for each single, double, etc. war. The loop through b_cards_on_table is reversed
					because the card being turned is shown left of the hidden cards in the war.
				{% endcomment %}
				{% for slice_cut in slices %}
					{% for card in b_cards_on_table|slice:slice_cut reversed %}
						{% if card.1 == True %}
							<img src="{% static 'krig/'|add:card.0|add:'.svg' %}">
						{% else %}
							<img src="{% static 'krig/back_r.svg' %}">
						{% endif %}
					{% endfor %}<br>
				{% endfor %}
			</div>
		</div>

		{% comment %}
			This form is used for user input with the text in the button depending on whether user is on:
			1) Starting page: User can start a game
			2) In an ongoing game: User can turn next card
			3) In a game that has ended: User can start a new game
		{% endcomment %}
		<form class="next" action="{% url 'krig_index' %}" method="post">
			{% csrf_token %}
			<input name="player_a_deck" type="hidden" value="{{ player_a_deck_form }}">
			<input name="player_b_deck" type="hidden" value="{{ player_b_deck_form }}">
			<input name="index" type="hidden" value="{{ index }}">
			<button type="submit">{% if not game_status %}Start spillet{% elif game_status == "Going on" %}Vend næste kort{% elif game_status == "Over" %}Start nyt spil{% endif %}</button>
		</form>
	</body>
</html>
{% endspaceless %}

Enjoy!