Books I read, or: Python and Django let me realise my nerdiest dreams

I like to document my doings and for about 15 years I’ve been documenting the books I have read. First in Notepad, then in Excel and finally in Python and Django with a database somewhere in the background. I am amazed what experts help amateurs achieve.

Take a look at what I made

This post explains the process of collecting data about my reads in little detail, and the code behind the page in too great detail.

Some books of 2020

Finding information online

Most data was crawled from Danish library resources, Goodreads and Wikipedia with varying success. A lot was entered manually, especially for works in translation. I spent hours and hours being pedantic.

Even though librarians have been managing data longer than anyone else on the planet, there is no authoritative relational database where you can look up when some book by some author was first published and when the first Danish-language version came out. In defence of librarians, many writers go to great lengths to make data management on books hard (one example is the genre “non-fiction novel” used by Spanish writer Javier Cercas).

The mysteries of Goodreads

I was mystified by the ability of Goodreads to place study guides and commentary to great works of literature first in their search results (and many more strange things), and terrified by Google displaying author birthdays on top of search results that I could find nowhere else on the web.

Also, Goodreads magically has editions of books that are older than when Goodreads claims the book was first published.

Goodreads: When what you’re searching for is nowhere near the first hit
How does this autocomplete work?

I wonder?

First published on April 5, but first listed edition is from March 23. Huh?

Adding books

After crawling for data, I made a form to add new books:

Step 1. Push “Look up”

The form

This was a breeze in Django. Here are the forms:

from django.forms import ModelForm
from books.models import Author, Title, Read

class AuthorForm(ModelForm):
	class Meta:
		model = Author
		fields = ['first_name', 'last_name','gender','country','biography','birth_date','data_quality']
class TitleForm(ModelForm):
	class Meta:
		model = Title
		fields = ['title','genre','read_language','original_language','publisher','isbn','published_date','first_published','cover_url','ereolen_url','biblo_dk_url','good_reads_url','pages','original_title']	
class ReadForm(ModelForm):
	class Meta:
		model = Read
		fields = ['date']	

The view:

And here’s the logic of the view (I probably shouldn’t be uncritically saving cover URLs found on the internet to my server, but):

import shutil
import requests
from PIL import Image
from django.conf import settings
from django.shortcuts import render

# Add a read to database
def add_read(request):
	book_saved = False
	context = {}
	author_form = AuthorForm()
	title_form = TitleForm()
	read_form = ReadForm()
	if request.method == 'POST':	# AND SUBMIT BUTTON
		author_form = AuthorForm(request.POST)
		title_form = TitleForm(request.POST)
		read_form = ReadForm(request.POST)
		if author_form.is_valid() and title_form.is_valid() and read_form.is_valid():
			author_data = author_form.cleaned_data
			title_data = title_form.cleaned_data
			read_data = read_form.cleaned_data

			author = False
			title = False
			existing_author = False
			existing_title = False
			# Check if author already exists
			try:
				author = Author.objects.get(first_name=author_data['first_name'], last_name=author_data['last_name'])
				existing_author = True
				context['existing_author'] = existing_author
			except:
				if 'lookup' in request.POST:
					if any(not value for value in author_data.values()):
						author_data, title_data = get_author(author_data, title_data)	# try to fetch data
			# Check if title already exists, will only work if the author has been found (book is re-read)
			try:
				if author:
					title = Title.objects.get(authors=author, title=title_data['title'])
					existing_title = True
					context['existing_title'] = True
			except:
				if 'lookup' in request.POST:
					if any(not value for value in title_data.values()):
						title_data, author_data = get_title(title_data, author_data)	# try to fetch data
			# Render form with data from database or collected data
			if 'lookup' in request.POST:
				if not existing_author:
					author_form = AuthorForm(author_data)
				else:
					author_form = AuthorForm(instance=author)
				if not existing_title:
					title_form = TitleForm(title_data)
				else:
					title_form = TitleForm(instance=title)
			# Save data
			if 'save' in request.POST:
				if not existing_author:
					author = author_form.save()
				if not existing_title:
					title = title_form.save()
					title.authors.add(author)
					if title.cover_url:
						file = requests.get(title.cover_url, stream=True)
						save_location = settings.STATIC_ROOT + "books/covers/"

						if '.jpg' in title.cover_url:
							ending = '.jpg'
						elif '.png' in title.cover_url:
							ending = '.png'
						elif '.webp' in title.cover_url:
							ending = '.webp'
						else:
							ending = '.jpg'
						id = title.id
						filename = str(id) + ending
						with open(save_location+filename, 'wb') as f:
							file.raw.decode_content = True
							shutil.copyfileobj(file.raw, f)
						title.cover_filename = filename
						title.save()
						#create thumbnail
						image = Image.open(save_location + filename).convert("RGB")
						maxsize = 150, 150
						image.thumbnail(maxsize, Image.ANTIALIAS)
						image.save(save_location + "150/" + str(id) + ".webp", "WEBP")

				save_read = read_form.save(commit=False)
				save_read.title = title
				save_read.save()
				# Set save variable to True and display empty form
				book_saved = True
				author_form = AuthorForm()
				title_form = TitleForm()
				read_form = ReadForm()
	context.update({'author_form': author_form, 'title_form': title_form, 'read_form': read_form, 'book_saved': book_saved})
	return render(request, 'books/add.html', context)

The helper functions

If you are a really curious and patient individual, you may be wondering about the get_author and get_title functions. You are in luck! Here is most of the code that helps me scrape some data from the internet and will probably break in the future:

import re
from datetime import datetime, date

import requests
from bs4 import BeautifulSoup
from dateparser.search import search_dates

def numbers_in_string(string):
	numbers = sum(character.isdigit() for character in string)
	return numbers

def get_author(author_data, title_data):
	if not author_data['biography']:
		if not author_data['country'] == 'da':
			url = 'https://en.wikipedia.org/w/index.php?search=%22' + author_data['first_name'] + " " + author_data['last_name'] + '%22&title=Special:Search&profile=advanced&fulltext=1&ns0=1'
		else:
			url = 'https://da.wikipedia.org/w/index.php?search=%22' + author_data['first_name'] + " " + author_data['last_name'] + '%22&title=Special:Search&profile=advanced&fulltext=1&ns0=1'
	else:
		url = author_data['biography']
	author_request = requests.get(url)

	if author_request.status_code == 200:
		soup = BeautifulSoup(author_request.text, "lxml")
		try:
			first_result = soup.find('div', {'class':'mw-search-result-heading'}).a['href']
			if not author_data['country'] == 'da':
				result_page = 'https://en.wikipedia.org' + first_result
			else:
				result_page = 'https://da.wikipedia.org' + first_result
		except:
			result_page = url
		page_request = requests.get(result_page)
		soup = BeautifulSoup(page_request.text, "lxml")
		# If not provided, set biography
		if not author_data['biography']:
			author_data['biography'] = result_page
		# If not provided, try to get birth_date
		if not author_data['birth_date']:
			try:
				birthday = soup.find('span', {'class':'bday'}).string
				author_data['birth_date'] = datetime.strptime(birthday, '%Y-%m-%d')
			except:
				try:
					birthday = soup.find('th', text="Født").parent.get_text()
					# sometimes the above doesn't return a space between year and next info causing a fuckup
					try:
						find_year = re.search(r"\d\d\d\d\S", birthday).span()[1]
						birthday = birthday[:find_year-1] + " " + birthday[find_year-1:]
					except:
						pass
					# sometimes even more fuckery
					try:
						letters_and_numbers_together = re.search(r"[a-zA-Z]\d", birthday).span()[1]
						birthday = birthday[:letters_and_numbers_together-1] + " " + birthday[letters_and_numbers_together-1:]
					except:
						pass
					birthday_date = search_dates(birthday,languages=['da'])[0][1]
					author_data['birth_date'] = birthday_date
				except:
					paragraphs = soup.find_all('p')
					for paragraph in paragraphs:
						text = paragraph.get_text()
						if '(født' in text:
							birth_mention = text.find('(født')
							birth_string = text[birth_mention+1:text.find(")",birth_mention)]
							if len(birth_string) < 10:	# just a year, probably
								year = int(birth_string[5:10])
								birthday = date(year,1,1)
								author_data['birth_date'] = birthday
							else:
								birthday_date = search_dates(birth_string,languages=['da'])[0][1]
								author_data['birth_date'] = birthday_date
		# If not provided, try to get country
		if not author_data['country']:
			country = False
			birthplace = False
			try:
				birthplace = soup.find('div', {'class':'birthplace'}).get_text()
			except:
				try:
					birthplace = soup.find('th', text="Born").parent.get_text()
				except:
					pass
			if birthplace:
				country = get_country(birthplace)
				if not country:
					try:
						birthplace = soup.find('th', text="Nationality").find_next_sibling().string
						country = get_country(birthplace)
					except:
						pass
			if country:
				author_data['country'] = country
				if not title_data['original_language']:
					if country == 'us' or country == 'sc' or country == 'ir' or country == 'en' or country == 'au':
						country = 'en'
					title_data['original_language'] = country
	if not author_data['gender']:
		request = requests.get('https://api.genderize.io/?name=' + author_data['first_name'] + '&key=vCjPrydWvlRcMxGszD')
		response = request.json()
		if response['gender'] == 'male':
			author_data['gender'] = 'm'
		elif response['gender'] == 'female':
			author_data['gender'] = 'f'
	if not author_data['data_quality']:
		if author_data['first_name'] and author_data['last_name'] and author_data['gender'] and author_data['country'] and author_data['birth_date'] and author_data['biography']:
			author_data['data_quality'] = 'med'
		else:
			author_data['data_quality'] = 'bad'
	if not author_data['biography'] and author_data['first_name'] and title_data['read_language'] == 'da':
		url = '' + author_data['last_name'][0].lower() + '.htm'
		request = requests.get(url)
		soup = BeautifulSoup(request.text, "lxml")
		links = soup.find_all('a', href=True)
		for link in links:
			if len(link['href']) > 7:
				text = link.get_text().lower()
				if author_data['last_name'].lower() + ", " + author_data['first_name'].lower() == text:
					url = '' + link['href']
					request = requests.get(url)
					soup = BeautifulSoup(request.text, "lxml")
					author_data['biography'] = request.url
					if not author_data['country']:
						author_data['country'] = 'da'
					if not author_data['birth_date']:
						born = soup.find(text=re.compile('Født'))
						if born:
							birthday_date = search_dates(born,languages=['da'])[0][1]
							author_data['birth_date'] = birthday_date
						else:
							born = soup.find(text=re.compile('f. '))
							birth_year = int(re.search(r"\d\d\d\d", born).group())
							author_data['birth_date'] = date(birth_year,1,1)
					if not title_data['original_language']:
						title_data['original_language'] = 'da'
	return author_data, title_data

def get_ereolen(title_data, author_data):
	soup = ""
	if not title_data['ereolen_url']:
		if title_data['isbn']:
			url = 'https://ereolen.dk/search/ting/' + title_data['isbn'] + '?&facets[]=facet.type%3Aebog'
		else:
			url = 'https://ereolen.dk/search/ting/' + author_data['first_name'] + " " + author_data['last_name'] + " " + title_data['title'] + '?&facets[]=facet.type%3Aebog'
		request = requests.get(url)
		try:
			search_soup = BeautifulSoup(request.text, "lxml")
			links = [a['href'] for a in search_soup.find_all('a', href=True) if '/collection/' in a['href']]
			book_request = requests.get('https://ereolen.dk' + links[0])
			soup = BeautifulSoup(book_request.text, "lxml")
			links = [a['href'] for a in soup.find_all('a', href=True) if '/object/' in a['href']]
			# ebooks and audiobook versions
			if len(links) == 4:
				book_request = requests.get('https://ereolen.dk' + links[0])
				soup = BeautifulSoup(book_request.text, "lxml")
			title_data['ereolen_url'] = 'https://ereolen.dk' + links[0]
		except:
			pass
	else:
		book_request = title_data['ereolen_url']
		book_request = requests.get(book_request)
		soup = BeautifulSoup(book_request.text, "lxml")

	if soup:
		if not title_data['published_date']:
			try:
				published = soup.find('div', class_={"field-name-ting-author"}).get_text()
				published = int(re.search(r"[(]\d\d\d\d[)]", published).group()[1:5])
				title_data['published_date'] = date(published,1,1)
			except:
				pass
		if not title_data['isbn']:
			try:
				isbn_tag = soup.find('div', class_={"field-name-ting-details-isbn"})
				title_data['isbn'] = isbn_tag.find('div', class_={"field-items"}).get_text()
			except:
				pass
		if not title_data['publisher']:
			try:
				publisher_tag = soup.find('div', class_={"field-name-ting-details-publisher"})
				title_data['publisher'] = publisher_tag.find('div', class_={"field-items"}).get_text()
			except:
				pass
		if not title_data['pages']:
			try:
				page_tag = soup.find('div', class_={"field-name-ting-details-extent"})
				title_data['pages'] = int(page_tag.find('div', class_={"field-items"}).get_text().replace(" sider",""))
			except:
				pass
		if not title_data['original_title']:
			try:
				original_title_tag = soup.find('div', class_={"field-name-ting-details-source"})
				title_data['original_title'] = original_title_tag.find('div', class_={"field-items"}).get_text()
			except:
				pass
		if not title_data['cover_url']:
			covers = [img['src'] for img in soup.find_all('img') if '/covers/' in img['src']]
			title_data['cover_url'] = covers[0][:covers[0].find("?")]
	return title_data, author_data

def get_bibliotek_dk(title_data, author_data):
	search_url = 'https://bibliotek.dk/da/search/work?search_block_form=phrase.creator%3D%22' + author_data['first_name'] + " " + author_data['last_name'] + '%22+and+phrase.title%3D%22' + title_data['title'] + '%22&select_material_type=bibdk_frontpage&op=S%C3%B8g&n%2Famaterialetype%5Bterm.workType%253D%2522literature%2522%5D=term.workType%253D%2522literature%2522&year_op=%2522year_eq%2522&year_value=&form_id=search_block_form&sort=rank_main_title&page_id=bibdk_frontpage'
	request = requests.get(search_url)
	soup = BeautifulSoup(request.text, "lxml")
	hits = soup.find_all('div', {'class':'work mobile-page'})
	if not hits:
		url = 'https://bibliotek.dk/da/search/work?search_block_form=' + author_data['first_name'] + " " + author_data['last_name'] + " " + title_data['title'] +'&select_material_type=bibdk_frontpage%2Fbog&op=S%C3%B8g&n%2Famaterialetype%5Bterm.workType%253D%2522literature%2522%5D=term.workType%253D%2522literature%2522&year_op=%2522year_eq%2522&year_value=&form_build_id=form-TQ8TlT3HGFiKXyvz6cCFaiuTMZKimuHMF-p4q1Mb8ZI&form_id=search_block_form&sort=rank_main_title&page_id=bibdk_frontpage#content'
		request = requests.get(url)
		soup = BeautifulSoup(request.text, "lxml")
		hits = soup.find_all('div', {'class':'work mobile-page'})
	for hit in hits:
		id = hit['id']
		title = hit.find('h2', {'class':'searchresult-work-title'}).get_text()
		author = hit.h3.get_text()
		if title_data['title'].lower() in title.lower() or title.lower() in title_data['title'].lower() or len(hits) == 1:
			if 'basis' in id:
				link = id.replace("basis","-basis:")
			elif 'katalog' in id:
				link = id.replace("katalog","-katalog:")
			biblo_url = 'https://bibliotek.dk/da/work/' + link
			request = requests.get(biblo_url)
			if not title_data['biblo_dk_url']:
				title_data['biblo_dk_url'] = biblo_url
			soup = BeautifulSoup(request.text, "lxml")
			if not title_data['cover_url']:
				try:
					img = soup.find('div', {'class':'bibdk-cover'}).img['src'].replace("/medium/","/large/")
					img = img[:img.find("?")]
					title_data['cover_url'] = img
				except:
					pass
			book_data = soup.find('div', {'class':'manifestation-data'})
			if not title_data['pages']:
				try:
					pages = book_data.find('div', {'class':'field-name-bibdk-mani-format'}).find('span', {'class':'openformat-field'}).string.strip()
					pages = pages[:pages.find(" ")]
					pages = int(pages)
					title_data['pages'] = pages
				except:
					pass
			if not title_data['publisher']:
				try:
					publisher = book_data.find('div', {'class':'field-name-bibdk-mani-publisher'}).find('span', {'property':'name'}).string
					title_data['publisher'] = publisher
				except:
					pass
			if not title_data['published_date'] or not title_data['first_published']:
				try:
					first_published = book_data.find('div', {'class':'field-name-bibdk-mani-originals'}).find('span', {'class':'openformat-field'}).string.strip()
					published = int(re.search(r"\d\d\d\d", first_published).group())
					if not title_data['published_date']:
						title_data['published_date'] = date(published,1,1)
					if not title_data['first_published'] and title_data['read_language'] == 'da' and title_data['original_language'] == 'da':
						title_data['first_published'] = date(published,1,1)
				except:
					try:
						pub_year = int(book_data.find('div', {'class':'field-name-bibdk-mani-pub-year'}).find('span', {'class':'openformat-field'}).string.strip())
						title_data['published_date'] = date(pub_year,1,1)
						if title_data['read_language'] == 'da' and title_data['original_language'] == 'da':
							try:
								edition = book_data.find('div', {'class':'field-name-bibdk-mani-edition'}).find('span', {'class':'openformat-field'}).string.strip()
								if edition == "1. udgave":
									title_data['first_published'] = date(pub_year,1,1)
							except:
								pass
					except:
						pass
	return title_data, author_data

def get_goodreads(title_data, author_data):
	if not title_data['good_reads_url']:
		searchterm = author_data['first_name'] + " " + author_data['last_name'] + " " + title_data['title']
		search_url = 'https://www.goodreads.com/search?utf8=✓&q=' + searchterm + '&search_type=books'
		response = requests.get(search_url)
		search_soup = BeautifulSoup(response.text, "lxml")
		all_results = search_soup.find_all('tr', {'itemtype':'http://schema.org/Book'})
		if not all_results:
			search_url = 'https://www.goodreads.com/search?utf8=✓&q=' + title_data['title'] + '&search_type=books'
			response = requests.get(search_url)
			search_soup = BeautifulSoup(response.text, "lxml")
			all_results = search_soup.find_all('tr', {'itemtype':'http://schema.org/Book'})
		if all_results:
			good_match = False
			#exact match
			for result in all_results:
				gr_author = result.find('span', {'itemprop':'author'}).get_text().strip()
				gr_author = gr_author.replace(' (Goodreads Author)','')
				if "   " in gr_author:
					gr_author = gr_author.replace("   "," ")
				elif "  " in gr_author:
					gr_author = gr_author.replace("  "," ")
				gr_title = result.find('a', {'class':'bookTitle'})
				gr_title_string = gr_title.get_text().strip()
				title_url = gr_title['href']
				if gr_title_string.lower() == title_data['title'].lower() and gr_author.lower() == author_data['first_name'].lower() + " " + author_data['last_name'].lower():
					good_match = True
			if good_match == True:
				url = 'https://www.goodreads.com' + title_url
				response = requests.get(url)
				soup = BeautifulSoup(response.text, "lxml")
			else:
				# no exact match: skip study guides, summaries and the like
				links = search_soup.find_all('a', href=True)
				books = [a['href'] for a in links if '/book/show/' in a['href']]
				for book in books:
					if not 'summary' in book and not 'analysis' in book and not 'lesson-plan' in book and not 'sidekick' in book and not 'teaching-with' in book and not 'study-guide' in book and not 'quicklet' in book and not 'lit-crit' in book and not author_data['last_name'].lower() in book:
						url = 'https://www.goodreads.com' + book
						response = requests.get(url)
						soup = BeautifulSoup(response.text, "lxml")
						heading = soup.find('h1', {'id': 'bookTitle'}).string
						if heading:
							break
	else:
		url = title_data['good_reads_url']
		response = requests.get(url)
		soup = BeautifulSoup(response.text, "lxml")

	if not title_data['good_reads_url']:
		if '?' in url:
			url = url[:url.rfind("?")]
		title_data['good_reads_url'] = url

	if not title_data['cover_url']:
		try:
			title_data['cover_url'] = soup.find('img', {"id" : "coverImage"})['src'].replace("compressed.","")
		except:
			pass
	details = soup.find('div', {"id" : "details"})
	details_text = details.get_text()
	if not title_data['published_date']:
		possible_dates = details.find_all('div', attrs={'class':'row'})
		for item in possible_dates:
			published_date = item.find(text=re.compile("Published"))
			if published_date:
				published_date = published_date.strip()
				numbers = numbers_in_string(published_date)
				if numbers > 4:
					title_data['published_date'] = search_dates(published_date,languages=['en'])[0][1]
				elif numbers == 4:
					year = int(re.search(r"\d\d\d\d", published_date).group())
					title_data['published_date'] = date(year,1,1)
	if not title_data['first_published']:
		try:
			first_published = details.find('nobr').string.strip()
			numbers = numbers_in_string(first_published)
			if numbers > 4:
				title_data['first_published'] = search_dates(first_published,languages=['en'])[0][1]
			elif numbers == 4:
				year = int(re.search(r"\d\d\d\d", first_published).group())
				title_data['first_published'] = date(year,1,1)
		except:
			pass
	if not title_data['pages']:
		try:
			pages = details.find('span', {'itemprop': 'numberOfPages'}).string
			title_data['pages'] = int(pages[:pages.find(" ")])
		except:
			pass
	if not title_data['publisher']:
		try:
			by_location = details_text.find("by ")
			title_data['publisher'] = details_text[by_location+3:details_text.find("\n", by_location)]
		except:
			pass
	if not title_data['isbn']:
		try:
			isbn = re.search(r"\d\d\d\d\d\d\d\d\d\d\d\d\d", details_text).group()
			title_data['isbn'] = isbn
		except:
			try:
				isbn = re.search(r"\d\d\d\d\d\d\d\d\d\d", details_text).group()
				title_data['isbn'] = isbn
			except:
				pass
	if not title_data['original_title'] and title_data['read_language'] != title_data['original_language']:
		try:
			parent = details.find('div', text="Original Title").parent
			original_title = parent.find('div', {'class':'infoBoxRowItem'}).string
			title_data['original_title'] = original_title
		except:
			pass

	return title_data, author_data
def get_title(title_data, author_data):
	if title_data['read_language'] == 'da':
		title_data, author_data = get_ereolen(title_data, author_data)
		title_data, author_data = get_bibliotek_dk(title_data, author_data)
		title_data, author_data = get_goodreads(title_data, author_data)
		#cover from ereolen, mofibo, saxo
		# danish library request
	else:
		title_data, author_data = get_goodreads(title_data, author_data)
	return title_data, author_data

The template

The simplicity:

<h1>Add book</h1>

{% if book_saved %}
	<p>Bogen blev gemt!</p>
{% endif %}	

<form method="post">
<p class="center"><input class="button blue" name="lookup" type="submit" value="Look up">
<input class="button green" name="save" type="submit" value="Save"></p>

<p class="center">
{% if author_form.biography.value %}
	<a href="{{ author_form.biography.value }}">biografi</a>
{% endif %}

{% if title_form.good_reads_url.value %}
	<a href="{{ title_form.good_reads_url.value }}">goodreads</a>
{% endif %}

{% if title_form.ereolen_url.value %}
	<a href="{{ title_form.ereolen_url.value }}">ereolen</a>
{% endif %}

{% if title_form.biblo_dk_url.value %}
	<a href="{{ title_form.biblo_dk_url.value }}">bibliotek.dk</a>
{% endif %}
</p>

{% csrf_token %}
<div class="grid addbook">
		{{ author_form }}
		{{ title_form }}
		{{ read_form }}
		{% if title_form.cover_url.value %}
		<img class="cover" src="{{ title_form.cover_url.value }}">
		{% endif %}
</div>
</form>

The data model

Here’s the data model, with the embarrassing list of countries and languages (that I should have gotten from somewhere else) edited out:

from django.db import models
from isbn_field import ISBNField

class Author(models.Model):
	GENDER_CHOICES = [
		('f', 'Female'),
		('m', 'Male'),
		('o', 'Other'),
	]
	DATA_QUALITY_CHOICES = [
		('good', 'Good'),
		('bad', 'Bad'),
		('med', 'Medium'),
	]
	# COUNTRY_CHOICES edited out

	first_name = models.CharField('First name', max_length=500, blank=True)
	last_name = models.CharField('Last name', max_length=500)
	def __str__(self):
		return self.first_name + " " + self.last_name
	def get_titles(self):
		return " & ".join([t.title for t in self.title_set.all()])
	gender = models.CharField('Gender', choices=GENDER_CHOICES, max_length=1, blank=True)
	birth_date = models.DateField(null=True, blank=True)
	country = models.CharField('Country', choices=COUNTRY_CHOICES, max_length=2, blank=True)
	biography = models.URLField('Biography url', max_length=500, blank=True) 
	data_quality = models.CharField('Datakvalitet', choices=DATA_QUALITY_CHOICES, max_length=4, blank=True)
	class Meta:
		ordering = ['last_name']
class Title(models.Model):
	GENRE_CHOICES = [
		('nf', 'Non-Fiction'),
		('fi', 'Fiction'),
	]
	# LANGUAGE_CHOICES edited out

	authors = models.ManyToManyField(Author)
	def get_authors(self):
		return " & ".join([t.first_name + " " + t.last_name for t in self.authors.all()])
	get_authors.short_description = "Author(s)"	
	title = models.CharField('Title', max_length=500)
	def __str__(self):
		return self.title
	read_language = models.CharField('Read in language', choices=LANGUAGE_CHOICES, max_length=2)
	original_language = models.CharField('Original language', choices=LANGUAGE_CHOICES, max_length=2, blank=True)
	original_title = models.CharField('Original title', max_length=500, blank=True)
	genre = models.CharField('Overall genre', choices=GENRE_CHOICES, max_length=2)
	publisher = models.CharField('Publisher', max_length=100, blank=True)
	first_published = models.DateField(null=True, blank=True)
	published_date = models.DateField(null=True, blank=True)
	isbn = ISBNField(null=True, blank=True)
	cover_filename = models.CharField('Cover filename', max_length=100, blank=True)
	cover_url = models.URLField('Cover-url', max_length=500, blank=True)
	pages = models.PositiveIntegerField(blank=True, null=True)
	good_reads_url = models.URLField('Goodreads-url', max_length=500, blank=True)
	ereolen_url = models.URLField('Ereolen-url', max_length=500, blank=True)
	biblo_dk_url = models.URLField('Biblo-url', max_length=500, blank=True)

	class Meta:
		ordering = ['title']

class Read(models.Model):
	title = models.ForeignKey(Title, on_delete=models.CASCADE)
	date = models.DateField()
	sort_order = models.PositiveIntegerField(blank=True, null=True)

The front page

The function for the front page is short and sweet:

def index(request):
	context = {}
	context['request'] = request
	reads = Read.objects.order_by('-date__year', 'date__month','sort_order','id').select_related('title')
	context['reads'] = reads
	context['months'] = [[i, calendar.month_abbr[i]] for i in range(1,13)]
	return render(request, 'books/index.html', context)

And, while longer, I think the template loop is nice too (although there is that clumsy nested loop):

{% regroup reads by date.year as years_list %}

{% for year, readings in years_list %}
	<h2>{{ year }}</h2>
	{% if year == 2015 %}
		<p>I was on paternity leave most of this year which gave me time to read a lot, but not the mental surplus to register by month. This year I bought a Kindle which re-kindled (durr) my interest in reading.</p>
	{% elif year == 2004 %}
		<p>I was working in England from around September 2003 to February 2004. This gave me time to read a lot, but not the computer access at home necessary to register my reads precisely.</p>
	{% elif year == 2003 %}
		<p>The year I began registering my reads.</p>
	{% elif year == 2002 %}
		<p>This - and all years before - is from memory in 2003, so not really precise.</p>
	{% endif %}
	{% regroup readings by date.month as months_list %}
	{% if year > 2004 and not year == 2015 %}
		<div class="grid reads">
			{% for month in months %}
				<div class="flex">
					<div>{{ month.1 }}</div>
					{% for mon, reads in months_list %}
						{% if mon == month.0 %}
							{% for read in reads %}
								<a title="{{ read.title }}" href="{% url 'books_book' read.title.id %}"><img class="frontcover" loading="lazy" src="{% static 'books/covers/150/' %}{{ read.title.id }}.webp"></a>
							{% endfor %}
						{% endif %}
					{% endfor %}
				</div>
			{% endfor %}
		</div>
	{% else %}
		{% for read in readings %}
			<a href="{% url 'books_book' read.title.id %}"><img class="frontcover" loading="lazy" src="{% static 'books/covers/150/' %}{{ read.title.id }}.webp"></a>
		{% endfor %}
	{% endif %}
{% endfor %}
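Django's `{% regroup %}` tag behaves like `itertools.groupby`: it only groups adjacent items, which is why the view hands the template a queryset already ordered by year. A minimal Python sketch of the same idea, using made-up reads instead of model instances:

```python
from itertools import groupby
from datetime import date

# Hypothetical reads; the real view passes Read model instances
reads = [
    {"title": "A", "date": date(2020, 1, 5)},
    {"title": "B", "date": date(2020, 3, 2)},
    {"title": "C", "date": date(2019, 7, 1)},
]
# Like {% regroup %}, groupby only groups adjacent items,
# so sort by the grouping key first (newest year on top)
reads.sort(key=lambda r: r["date"].year, reverse=True)
years_list = [
    (year, list(group))
    for year, group in groupby(reads, key=lambda r: r["date"].year)
]
# years_list now mirrors the template's years_list: [(2020, [...]), (2019, [...])]
```

If the list were unsorted, each run of equal years would become its own group, which is exactly the bug you get when you forget `order_by` before `{% regroup %}`.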

The statistics page

The charts on the statistics page are made with Chart.js, which is so easy that you don’t even need to know JavaScript.

Here’s the function which could probably be sped up if I had any idea how (which I don’t):

def statistics(request):
	context = {}
	# All reads, used for lots of charts
	reads = Read.objects.order_by('date__year').select_related('title').prefetch_related('title__authors')
	context['reads'] = reads
	# Books per year chart queryset
	books_pages_per_year = Read.objects.values('date__year').annotate(Count('id'), Sum('title__pages'), Avg('title__pages')).order_by('date__year')
	context['books_pages_per_year'] = books_pages_per_year
	# Prepare year, value-dictionaries
	genre_structure = {}	# fiction vs. non-fiction
	author_gender_structure = {}	# male vs. female
	author_birth_structure = {}	# median age of authors
	read_language_structure = {} # language of read
	original_language_structure = {} # original language of read
	language_choices = dict(Title.LANGUAGE_CHOICES)	# look up dict for original languages
	author_country_structure = {} # country of author
	country_choices = dict(Author.COUNTRY_CHOICES)
	book_age_structure = {} # median age of books

	for read in reads:
		year_of_read = read.date.year
		# Put year keys in dictionaries
		if not year_of_read in genre_structure:	# check one = check all
			genre_structure[year_of_read] = []
			author_gender_structure[year_of_read] = []
			author_birth_structure[year_of_read] = []
			read_language_structure[year_of_read] = []
			original_language_structure[year_of_read] = []
			author_country_structure[year_of_read] = []
			book_age_structure[year_of_read] = []
		# Put values in dictionaries
		if read.title.read_language == 'da' or read.title.read_language == 'en':
			read_language_structure[year_of_read].append(read.title.read_language)
		if read.title.original_language:
			original_language_structure[year_of_read].append(language_choices[read.title.original_language])
		if read.title.genre:
			genre_structure[year_of_read].append(read.title.genre)
		if read.title.first_published:
			book_age_structure[year_of_read].append(read.title.first_published.year)
		for author in read.title.authors.all():
			if author.gender:
				author_gender_structure[year_of_read].append(author.gender)
			if author.birth_date:
				author_birth_structure[year_of_read].append(author.birth_date.year)
			if author.country:
				author_country_structure[year_of_read].append(country_choices[author.country])
	# Prepare datasets for charts
	genres = {}
	for year, genre_list in genre_structure.items():
		number_of_titles = len(genre_list)
		number_of_fiction_titles = sum(1 for genre in genre_list if genre == 'fi')
		fiction_percentage = int(number_of_fiction_titles/number_of_titles*100)
		non_fiction_percentage = 100 - fiction_percentage
		genres[year] = [fiction_percentage, non_fiction_percentage]
	context['genres'] = genres
	median_author_age = {}
	for year, birthyears in author_birth_structure.items():
		birthyears = sorted(birthyears)
		median_birthyear = birthyears[len(birthyears) // 2]
		median_author_age[year] = year - median_birthyear
	context['median_author_age'] = median_author_age
	author_genders = {}
	for year, genders in author_gender_structure.items():
		number_of_authors = len(genders)
		males = sum(1 for gender in genders if gender == 'm')
		male_percentage = int(males/number_of_authors*100)
		female_percentage = 100 - male_percentage
		author_genders[year] = [male_percentage, female_percentage]
	context['author_genders'] = author_genders
	read_languages = {}
	for year, languages in read_language_structure.items():
		number_of_languages = len(languages)
		danish = sum(1 for language in languages if language == 'da')
		danish_percentage = int(danish / number_of_languages * 100)
		english_percentage = 100 - danish_percentage
		read_languages[year] = [danish_percentage, english_percentage]
	context['read_languages'] = read_languages
	original_languages = []
	original_languages_years = []
	for year, languages in original_language_structure.items():
		if not year in original_languages_years:
			original_languages_years.append(year)
		for lang in languages:
			if lang not in original_languages:
				original_languages.append(lang)
	original_languages_template = {}
	for language in original_languages:
		original_languages_template[language] = []
		for year in original_languages_years:
			count_of_language_in_year = sum(1 for lang in original_language_structure[year] if language == lang)
			original_languages_template[language].append(count_of_language_in_year)
	context['original_languages_template'] = original_languages_template
	context['original_languages_years'] = original_languages_years

	author_countries = []
	author_countries_years = []
	for year, countries in author_country_structure.items():
		if not year in author_countries_years:
			author_countries_years.append(year)
		for country in countries:
			if country not in author_countries:
				author_countries.append(country)
	author_countries_template = {}
	for country in author_countries:
		author_countries_template[country] = []
		for year in author_countries_years:
			count_of_country_in_year = sum(1 for countr in author_country_structure[year] if country == countr)
			author_countries_template[country].append(count_of_country_in_year)
	context['author_countries_template'] = author_countries_template
	context['author_countries_years'] = author_countries_years

	median_book_age = {}
	for year, publish_years in book_age_structure.items():
		publish_years = sorted(publish_years)
		# account for no data in years
		if len(publish_years) >= 2:
			median_publish_year = publish_years[len(publish_years) // 2]
		elif len(publish_years) == 1:
			median_publish_year = publish_years[0]
		else:
			median_publish_year = 0
		median_book_age[year] = year - median_publish_year
	context['median_book_age'] = median_book_age
	return render(request, 'books/statistics.html', context)
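The median logic in the view can be checked in isolation. Here is a quick sketch with made-up reading years (the function name and sample data are mine, not from the site):

```python
def median_publish_year(publish_years):
	# Same selection logic as the view: upper median for two or more
	# values, the single value itself for one, and 0 for no data
	publish_years = sorted(publish_years)
	if len(publish_years) >= 2:
		return publish_years[len(publish_years) // 2]
	elif len(publish_years) == 1:
		return publish_years[0]
	else:
		return 0

# Five books read in 2020, first published in these years:
sample = [1984, 2001, 1967, 2019, 1999]
print(2020 - median_publish_year(sample))  # median is 1999, so the "book age" is 21
```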

And a template example:

	<h2>Reads per year</h2>
	<canvas id="books_per_year"></canvas>

<script>
var ctx = document.getElementById('books_per_year').getContext('2d');
var myChart = new Chart(ctx, {
	type: 'bar',
	data: {
		labels: [{% for year in books_pages_per_year %}{% if not forloop.last %}{{ year.date__year }}, {% else %}{{ year.date__year }}{% endif %}{% endfor %}],
		datasets: [{
			label: 'Read',
			data: [{% for year in books_pages_per_year %}{% if not forloop.last %}{{ year.id__count }}, {% else %}{{ year.id__count }}{% endif %}{% endfor %}],
			backgroundColor: 'rgba(255, 99, 132, 0.2)',
			borderColor: 'rgba(255, 99, 132, 1)',
			borderWidth: 1
		}]
	},
	options: {
		tooltips: {
			callbacks: {
				label: function(tooltipItem, data) {
					return data.datasets[tooltipItem.datasetIndex].label + ': ' + tooltipItem.value + ' books';
				}
			}
		},
		legend: {
			display: false
		},
		responsive: true,
		scales: {
			yAxes: [{
				ticks: {
					beginAtZero: true
				}
			}]
		}
	}
});
</script>

Wallnot's Twitter bot, version 3

Wallnot's Twitter bot finds shared articles from Politiken and Zetland on Twitter and shares them with the world. It works like this:

# Author: Morten Helmstedt. E-mail:

import requests
from bs4 import BeautifulSoup
from datetime import datetime
from datetime import date
from datetime import timedelta
import json
import time
import random
from TwitterAPI import TwitterAPI
from nested_lookup import nested_lookup

# List to store articles to post to Twitter
articlestopost = []

# Search tweets from last 3 hours
now = datetime.utcnow()
since_hours = 3
since = now - timedelta(hours=since_hours)
since_string = since.strftime("%Y-%m-%dT%H:%M:%SZ")

# Search configuration
tweet_fields = "tweet.fields=entities"
media_fields = "media.fields=url"
max_results = "max_results=100"
start_time = "start_time=" + since_string

# Twitter API login
client_key = ''
client_secret = ''
access_token = ''
access_secret = ''
api = TwitterAPI(client_key, client_secret, access_token, access_secret)

bearer_token = ''

# Run search
query = ''

url = "{}&{}&{}&{}&{}".format(
	query, tweet_fields, media_fields, max_results, start_time
)
headers = {"Authorization": "Bearer {}".format(bearer_token)}
response = requests.request("GET", url, headers=headers)
json_response = response.json()

urllist = list(set(nested_lookup('expanded_url', json_response)))

# Only process urls that were not in our last Twitter query
proceslist = []
with open("./pol_lastbatch.json", "r", encoding="utf8") as fin:
	lastbatch = list(json.load(fin))
	for url in urllist:
		if url not in lastbatch and query in url:
			proceslist.append(url)
# Save current query to use for next time
with open("./pol_lastbatch.json", "wt", encoding="utf8") as fout:
	lastbatch = json.dumps(urllist)
	fout.write(lastbatch)

# Request articles and get titles and dates and sort by dates
articlelist = []

pol_therewasanerror = False
for url in proceslist:
	try:
		if '' in url:
			start = url.find('url=')+4
			end = url.find('&', start)
			url = url[start:end]
		if not len(url) == 37:
			url = url[:37]
		data = requests.get(url)
		result = data.text
		if '"isAccessibleForFree": "True"' not in result:
			realurl = data.history[0].headers['Location']
			if not "/article" in realurl and not ".ece" in realurl:
				start_of_unique_id = realurl.index("/art")+1
				end_of_unique_id = realurl[start_of_unique_id:].index("/")
				unique_id = realurl[start_of_unique_id:start_of_unique_id+end_of_unique_id]
			elif "/article" in realurl and ".ece" in realurl:
				start_of_unique_id = realurl.index("/article")+1
				end_of_unique_id = realurl[start_of_unique_id:].index(".ece")
				unique_id = realurl[start_of_unique_id:start_of_unique_id+end_of_unique_id]
			articlelist.append({"id": unique_id, "url": url})
	except Exception as e:
		pol_therewasanerror = True

# If something fails, we'll process everything again next time
if pol_therewasanerror == True:
	with open("./pol_lastbatch.json", "wt", encoding="utf8") as fout:
		urllist = []
		lastbatch = json.dumps(urllist)
		fout.write(lastbatch)
# Check if article is already posted and update list of posted articles
with open("./pol_published_v2.json", "r", encoding="utf8") as fin:
	alreadypublished = list(json.load(fin))
	for article in articlelist:
		hasbeenpublished = False
		for published_article in alreadypublished:
			if article['id'] == published_article['id']:
				hasbeenpublished = True
				break
		if hasbeenpublished == False:
			articlestopost.append(article)
			alreadypublished.append(article)
	# Save updated already published links
	with open("./pol_published_v2.json", "wt", encoding="utf8") as fout:
		alreadypublishedjson = json.dumps(alreadypublished)
		fout.write(alreadypublishedjson)

# Run search
query = ''

url = "{}&{}&{}&{}&{}".format(
	query, tweet_fields, media_fields, max_results, start_time
)
headers = {"Authorization": "Bearer {}".format(bearer_token)}
response = requests.request("GET", url, headers=headers)
json_response = response.json()

urllist = list(set(nested_lookup('expanded_url', json_response)))

# Only process urls that were not in our last Twitter query
proceslist = []
with open("./zet_lastbatch.json", "r", encoding="utf8") as fin:
	lastbatch = list(json.load(fin))
	for url in urllist:
		if url not in lastbatch and query in url:
			proceslist.append(url)
# Save current query to use for next time
with open("./zet_lastbatch.json", "wt", encoding="utf8") as fout:
	lastbatch = json.dumps(urllist)
	fout.write(lastbatch)

# Request articles and get titles and dates and sort by dates
articlelist = []
titlecheck = []

zet_therewasanerror = False
for url in proceslist:
	try:
		if '' in url:
			start = url.find('url=')+4
			end = url.find('&', start)
			url = url[start:end]
		data = requests.get(url)
		result = data.text
		soup = BeautifulSoup(result, "lxml")
		title = soup.find('meta', attrs={'property':'og:title'})
		title = title['content']
		timestamp = soup.find('meta', attrs={'property':'article:published_time'})
		timestamp = timestamp['content']
		timestamp = timestamp[:timestamp.find("+")]
		dateofarticle = datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S.%f')
		if title not in titlecheck:
			articlelist.append({"title": title, "url": url, "date": dateofarticle})
			titlecheck.append(title)
	except Exception as e:
		zet_therewasanerror = True

# If something fails, we'll process everything again next time
if zet_therewasanerror == True:
	with open("./zet_lastbatch.json", "wt", encoding="utf8") as fout:
		urllist = []
		lastbatch = json.dumps(urllist)
		fout.write(lastbatch)

articlelist_sorted = sorted(articlelist, key=lambda k: k['date']) 

# Check if article is already posted and update list of posted articles
with open("./zet_published.json", "r", encoding="utf8") as fin:
	alreadypublished = list(json.load(fin))
	for art in articlelist_sorted:
		title = art['title']
		if title not in alreadypublished:
			articlestopost.append(art)
			alreadypublished.append(title)
	# Save updated already published links
	with open("./zet_published.json", "wt", encoding="utf8") as fout:
		alreadypublishedjson = json.dumps(alreadypublished, ensure_ascii=False)
		fout.write(alreadypublishedjson)

friendlyterms = ["flink","rar","gavmild","velinformeret","intelligent","sød","afholdt","bedårende","betagende","folkekær","godhjertet","henrivende","smagfuld","tækkelig","hjertensgod","graciøs","galant","tiltalende","prægtig","kær","godartet","human","indtagende","fortryllende","nydelig","venlig","udsøgt","klog","kompetent","dygtig","ejegod","afholdt","omsorgsfuld","elskværdig","prægtig","skattet","feteret"]
enjoyterms = ["God fornøjelse!", "Nyd den!", "Enjoy!", "God læsning!", "Interessant!", "Spændende!", "Vidunderligt!", "Fantastisk!", "Velsignet!", "Glæd dig!", "Læs den!", "Godt arbejde!", "Wauv!"]

if articlestopost:
	for art in articlestopost:
		if "zetland" in art['url']:
			medium = "@ZetlandMagasin"
		else:
			medium = "@politiken"
		friendlyterm = random.choice(friendlyterms)
		enjoyterm = random.choice(enjoyterms)
		status = "En " + friendlyterm + " abonnent på " + medium + " har delt en artikel. " + enjoyterm
		twitterstatus = status + " " + art['url']
		try:
			twitterupdate = api.request('statuses/update', {'status': twitterstatus})
		except Exception as e:
			continue
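The ID extraction in the Politiken loop above can be sanity-checked in isolation. Both URLs below are invented for illustration; the slicing is the same as in the bot:

```python
def politiken_article_id(realurl):
	# Newer URLs carry an /artXXXXXXX/ segment; older ones use /articleXXXXXXX.ece
	if not "/article" in realurl and not ".ece" in realurl:
		start = realurl.index("/art") + 1
		end = realurl[start:].index("/")
		return realurl[start:start + end]
	elif "/article" in realurl and ".ece" in realurl:
		start = realurl.index("/article") + 1
		end = realurl[start:].index(".ece")
		return realurl[start:start + end]

# Made-up URLs for illustration:
print(politiken_article_id("https://politiken.dk/kultur/art7654321/en-rubrik"))   # art7654321
print(politiken_article_id("https://politiken.dk/indland/article1234567.ece"))    # article1234567
```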

Basic image processing in Python

After downloading a lot of beautiful photographs from the internet, I needed to do some rough sorting. I wanted to remove the photos whose resolution was too low for me to bother looking at them, let alone printing them at some later point.

With the Python library Pillow, chewing through my photos was easy.

First I use the "walk" functionality from the os library, which lets me loop through all folders, subfolders and files from a starting point on my hard drive.

Then I use Pillow to get the dimensions of each photo and calculate its area. If a photo is 3 megapixels (3 million pixels) or larger, I keep it. If it is smaller, I delete it.

Here you can see how I did it:

# Author: Morten Helmstedt. E-mail:
'''A program to go through a directory and subdirectories and delete
image files below a certain megapixel size.'''

import os						# Used to create directories at local destination
from PIL import Image
import PIL

save_location = "C:/Downloads/"
contents = os.walk(save_location)

for root, directories, files in contents:
	for file in files:
		location = os.path.join(root,file)
		if not ".py" in file:
			try:
				image = Image.open(location)
				area = image.size[0]*image.size[1]
				if area >= 3000000:
					print("big enough:", location)
				else:
					print("too small, deleting:", location)
					image.close()
					os.remove(location)
			except PIL.UnidentifiedImageError:
				# Files that look like images but can't be opened are deleted too
				if ".jpg" in file or ".png" in file or ".jpeg" in file or ".tif" in file:
					print("deleting:", location)
					os.remove(location)
			except PIL.Image.DecompressionBombError:
				print("image too big to process:", location)
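The 3-megapixel cutoff itself is plain arithmetic on the image dimensions. A minimal sketch (the resolutions below are examples of mine, not from the post):

```python
MIN_PIXELS = 3000000  # 3 megapixels

def keep_image(width, height):
	# Keep an image if its pixel area is at least 3 million pixels
	return width * height >= MIN_PIXELS

print(keep_image(2048, 1536))  # about 3.1 MP: kept
print(keep_image(1920, 1080))  # full HD is only about 2.1 MP: deleted
```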

A crawler for directory listings on the web

If you have spent time on the internet, you have probably come across one of these:

Many web administrators choose to hide these file listings, which the Apache web server software can generate automatically.

But I discovered by accident that I could see what the photo agency Magnum had uploaded to their WordPress installation.

I decided to try to make a local copy, so I could look at beautiful photographs without waiting for downloads from the internet.

First I tried Wget, a small program designed to mirror websites locally. But Wget had trouble fetching and chewing through the long lists of files. One of them was 36 megabytes. That is a lot of links.

So I wrote a small Python program that can chew through this type of directory and file listing and download the contents locally.

Here it is:

# Author: Morten Helmstedt. E-mail:
'''A program to fetch files from standard Apache directory listings on the internet.'''

import requests					# Send http requests and receive responses
from bs4 import BeautifulSoup	# Parse HTML data structures, e.g. to search for links
import os						# Used to create directories at local destination
import shutil					# Used to copy binary files from http response to local destination
import re						# Regex parser and search functions

# Terms to exclude, files with these strings in them are not downloaded
exclude = []

# Takes an url and collects all links
def request(url, save_location):
	# Print status to let user know that something is going on
	print("Requesting:", url)
	# Fetch url
	response = requests.get(url)
	# Parse response
	soup = BeautifulSoup(response.text, "lxml")
	# Search for all links and exclude certain strings and patterns from links
	urllist = [a['href'] for a in soup.find_all('a', href=True) if not '?C=' in a['href'] and not a['href'][0] == "/" and not any(term in a['href'] for term in exclude) and not re.search(r"\d\d[x]\d\d",a['href'])]
	# If status code is not 200 (OK), add url to list of errors
	if not response.status_code == 200:
		errorlist.append(url)
	# Send current url, list of links and current local save collection to scrape function
	return scrape(url, urllist, save_location)

def scrape(path, content, save_location):
	# Loop through all links
	for url in content:
		# Print status to let user know that something is going on
		print("Parsing/downloading:", path+url)
		# If there's a slash ("/") in the link, it is a directory
		if "/" in url:
			# Create local directory if it doesn't exist
			os.makedirs(save_location+url, exist_ok=True)
			# Run request function to fetch contents of directory
			request(path+url, save_location+url)
		# If the link doesn't contain a slash, it's a file and is saved
		else:
			# Check if file already exists, e.g. has been downloaded in a prior run
			if not os.path.isfile(save_location+url):
				# If file doesn't exist, fetch it from remote location
				file = requests.get(path+url, stream=True)
				# Print status to let user know that something is going on
				print("Saving file:", save_location+url)
				# Save file to local destination
				with open(save_location+url, 'wb') as f:
					# Decodes file if received compressed from server
					file.raw.decode_content = True
					# Copies binary file to local destination
					shutil.copyfileobj(file.raw, f)

# List to collect crawling errors
errorlist = []
# Local destination, e.g. 'C:\Downloads' for Windows
save_location = "C:/Downloads/"
# Remote location, e.g.
url = ""
# Call function to start crawling
request(url, save_location)
# Print any crawling errors
print(errorlist)
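The link filter in request() packs several conditions into one comprehension. Here are the same rules, unpacked into a helper and run on example hrefs (the exclusion term and file names are made up):

```python
import re

exclude = ["thumb"]  # example exclusion term; the real list is site-specific

def wanted(href):
	"""Mirror the crawler's link filter: skip Apache sort links (?C=...),
	absolute paths back to the root, excluded terms and WxH thumbnail names."""
	return (
		'?C=' not in href
		and href[0] != "/"
		and not any(term in href for term in exclude)
		and not re.search(r"\d\d[x]\d\d", href)
	)

links = ["photos/", "img_001.jpg", "?C=M;O=A", "/", "img_001-90x60.jpg", "thumb_small.jpg"]
print([l for l in links if wanted(l)])  # ['photos/', 'img_001.jpg']
```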

My round in the ring with Copyright Agent

Maybe you have heard of the unsympathetic invoicing factory Copyright Agent, which, lacking any understanding of how web pages and links work, sends unjustified scare-letter invoices to private individuals and small associations?

At least that has happened to the politician Pelle Dragsted, the politician Mette Abildgaard and the entrepreneur Martin Thorborg. Perhaps not exactly the kind of people you imagine are out to snatch the bread from the mouths of hard-working creatives?

Neither am I, but I have nevertheless formed my own impression of Copyright Agent as an unsympathetic, impersonal and uncomprehending invoicing factory.

Here is my history with the company:

Chapter 1: I have a website

On the site, companies in the culture industry can get in touch with students at the Department of Arts and Cultural Studies at the University of Copenhagen. Think: bulletin board.

It is free for the poor (but creative) companies, which for instance get talented interns without paying a penny.

The companies that are up to it register themselves and post their own listings.

Chapter 2: I get an email

On April 15, 2019, I receive this from a student assistant at the company Copyright Agent:

We have become aware that you have probably infringed copyright, as we cannot find any basis for the use in our systems.
As a picture agency, Ritzau Scanpix owns the resale rights to the image in question, which is marked with a red rectangle in the attached document, which further contains documentation for the infringement we believe has taken place.
On that basis, Ritzau Scanpix is obliged, towards its rights holders, to seek payment and compensation for images that have been published without authorisation. Even if it may not have been your intention, publishing the image without a valid licence or permission is an infringement of the photographer's copyright.
The attached material contains general information, documentation, an invoice and a statement of the compensation due to the rights holder, as well as "Frequently asked questions – and answers".
As Ritzau Scanpix is experiencing a rising number of copyright infringements of their material, they see themselves forced to find and police their material, so that they can continue to deliver quality material to their customers in the future.
Copyright Agent works with a number of professional photographers and leading picture agencies on securing their copyright on the internet.
You can read more about Copyright Agent here:
If you have questions or documentation relating to the case, you are very welcome to reply to this email or call us on 70 273 272 Monday – Friday from 9:00 – 17:00.

Please state your case number if you contact us by phone, so that we can help you with the specific case.

Attached to the email is a PDF file telling me that I have violated copyright law, along with an invoice for DKK 3,437.50, which I must pay "within 10 days from today's date".

Here you can see the PDF file – except that I have censored the image Copyright Agent inserted to document my alleged copyright infringement:

Chapter 3: I reply

From Copyright Agent's fine PDF I can see that it wasn't me who uploaded the image at all. It was a user from the sympathetic art-house cinema Posthus Teatret.

So my immediate thought is: this really has nothing to do with me. Just as a newspaper isn't liable if I paste the entire text of Syv år for PET into a comment on an article (as long as they remove it once they become aware of the infringement), I'm not liable when I assume in good faith that my users naturally have permission to publish the photos they publish – after all, it is their own creative industry that lives off copyright.

So I reply right away:

Dear Fatima

It is a user on the site who uploaded the image in question. Anyone can register on the site and post listings.

As far as I can see, the person in question is employed at – or affiliated with – Posthus Teatret. Her phone number is in the listing, so I think you should call her and ask. I will gladly remove the listing and/or the photo from the site, provided I receive a sworn statement from you that you own the copyright to the photo.

Best regards, Morten

Fatima replies:

Dear Morten
I am attaching documentation that Ritzau Scanpix holds the copyright to the image material.
We will contact Posthus Teatret. Thank you for your help.

I delete the image from the site and write:

Dear Fatima

I have deleted the image from the server.

And Fatima replies:

Dear Morten

It has been noted that the image has been removed, which we thank you for.

And I believe all is well. But it is not...

Chapter 4: The reminder

On May 14, 2019, I receive a new email from Fatima:


Since the payment deadline has passed, we are sending a small reminder. In the absence of a response within the week, we will send reminders in the cases with the original amounts.

This must be a mistake – totally sloppy to make that kind of error when Copyright Agent's racket is pressuring citizens with legal language.

I write the same day:

Dear Fatima

We have settled the matter – so I certainly expect you to drop the claim.

And I believe all is well. But it is not...

Chapter 5: Notice of debt collection

On June 12, I receive this email from – you guessed it – Fatima:

R2, copyright infringement – notice of debt collection


We have previously sent a claim for compensation for the infringement of our client's copyright. We have still not registered your payment and hereby send the attached reminder in the case.

Copyright Agent works with a number of professional photographers and leading picture agencies on securing their copyright on the internet.

You can read more about Copyright Agent here:

If you have questions or documentation relating to the case, you are very welcome to reply to this email or call us.

It is not that often I receive debt collection notices, so at this point I find Copyright Agent's behaviour deeply unpleasant. I promptly send no fewer than four emails back:

Dear Fatima

Please call me when you have a chance on 25 80 16 54. We have already closed this case, but you keep contacting me.

By the way, I have also replied to all your previous emails.

And just to make it completely clear: I do not intend to pay for a possible copyright infringement that I did not commit.

Dear Fatima

Attached is documentation of who – if a copyright infringement has taken place – committed it, by uploading the image in question to the server hosting the site. You can direct any claims to that person.

Please confirm that you are dropping the claim.

And finally the penny drops at Copyright Agent, which clearly relies on utterly incompetent automation and poor, underpaid student assistants in its unrestrained quest for profit:

Dear Morten

I apologise for the reminder, which was sent by mistake.
We will take the case forward with Posthus Teatret.

I hope things ended well for Posthus Teatret. I still don't know whether they were allowed to use the picture of their own cinema, but the fact that Copyright Agent goes after a poor cultural institution in order (by its own account) to help poor creatives shows clearly that Copyright Agent only does its error-ridden, incompetent work for the money.

Chapter 6: Why share the story?

So why publish my round in the ring with Copyright Agent?

So that others – as in the similar story about the law firm Njord, which sent unjustified invoices for downloaded films to just about anyone – can read about Copyright Agent's business model and methods, as a deterrent, a warning and perhaps a help, should they be so unlucky as to receive an email from the company.

1,440 companies track you if you accept cookies on Politiken's website

See the 1,440 companies

Today I visited the site and was met by:

I decided to investigate further.

Politiken's cookie policy hides in privacy-manager-view.json.

It is a 3 MB JSON file! I wrote a small Python program to chew through it:

import json
with open("privacy-manager-view.json", "r", encoding="utf8") as politiken:
	politiken = json.load(politiken)
	partners = []
	for vendor in politiken['vendors']:
		name = vendor['name']
		url = vendor['policyUrl']
		purposes = []
		if 'consentCategories' in vendor:
			for consent in vendor['consentCategories']:
				if consent['type'] == "IAB_PURPOSE":
					purposes.append(consent['type'])
		if 'iabSpecialPurposes' in vendor:
			for purpose in vendor['iabSpecialPurposes']:
				purposes.append(purpose)
		if 'iabFeatures' in vendor:
			for purpose in vendor['iabFeatures']:
				purposes.append(purpose)
		if 'iabSpecialFeatures' in vendor:
			for purpose in vendor['iabSpecialFeatures']:
				purposes.append(purpose)
		partners.append([name, url, purposes])
	partners.sort(key=lambda x:x[0].lower())
	number_of_partners = len(partners)
	linklist = "<html lang='da'><body><h1>"
	linklist += "Her er de " + str(number_of_partners) + " virksomheder, som overvåger dig, hvis du siger ja tak til alle cookies på (d. 11. december 2020)</h1><table>"
	for partner in partners:
		if partner[1]:
			linklist += "<tr><td><a href='" + partner[1] + "'>" + partner[0] + "</a></td></tr>\n"
		else:
			linklist += "<tr><td>" + partner[0] + "</td></tr>\n"
	linklist += "</table></body></html>"
	with open("linklist.html", "wt", encoding="utf8") as fout:
		fout.write(linklist)
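One detail worth noting above: the vendors are sorted with key=lambda x: x[0].lower(), so names like "adform" don't end up after all the capitalised names, as they would with a plain sort. A tiny illustration (vendor names invented):

```python
# Made-up vendor names to show why the sort key lowercases:
partners = [["Zeta Tracking", "https://example.com/z"],
            ["adform", "https://example.com/a"],
            ["Acme Ads", "https://example.com/b"]]
partners.sort(key=lambda x: x[0].lower())
print([p[0] for p in partners])  # ['Acme Ads', 'adform', 'Zeta Tracking']
```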

And here is the list:

Now you can play the card game War – online!

I have just published this year's Christmas game number 1: War!

Well, I had already tried simulating War, but I thought it would be interesting to connect the game logic with (admittedly very limited) interactivity and the logic needed to actually display the game.

Now I have made an attempt, and I have commented the code heavily, so it is hopefully easy to follow.

Here is the Django part:

from django.shortcuts import render
import random			# Used to shuffle decks
import base64			# Used for obfuscation and deobfuscation functions
from math import ceil 	# Used to round up

# Create decks function - not a view
def new_deck(context):
	# Create card values and list of cards in each colour
	card_values = range(2,15)
	spades = [str(i) + "S" for i in card_values]
	clubs = [str(i) + "C" for i in card_values]
	diamonds = [str(i) + "D" for i in card_values]
	hearts = [str(i) + "H" for i in card_values]
	# Combine colours to deck
	deck = spades + clubs + diamonds + hearts
	# Shuffle deck
	random.shuffle(deck)
	# Divide deck between two players and convert to commaseparated string
	player_a_deck = ",".join(deck[0:26])
	player_b_deck = ",".join(deck[26:52])
	# Obfuscate decks to make cheating marginally harder using the obfuscate function
	# production variable toggles this behavior because it's very time consuming to debug
	# if obfuscation is on
	production = True
	if production == True:
		player_a_deck = obfuscate(player_a_deck)
		player_b_deck = obfuscate(player_b_deck)
	# Add the two decks to context
	context['player_a_deck_form'] = player_a_deck
	context['player_b_deck_form'] = player_b_deck
	# Set index to 0 to only turn one card for first round of game
	context['index'] = 0
	return context

# Obfuscate by converting to base64 encoding - not a view
def obfuscate(deck):
	return base64.b64encode(deck.encode()).decode()

# Deobfuscate by converting from base64 encoding to string - not a view
def deobfuscate(deck):
	return base64.b64decode(deck.encode()).decode()

# Logic to create a list of which cards should be hidden or shown to player - not a view
def show_hide_cards(cards_on_table, index):
	counter = 0
	cards_on_table_show_hide = []
	for card in cards_on_table:
		# First card should always be shown
		if counter == 0:
			cards_on_table_show_hide.append([card, True])
		# If the card number is divisible by 4 it is the turn card in a war
		elif counter % 4 == 0:
			cards_on_table_show_hide.append([card, True])
		# If the card number equals the index value, one or both players do not
		# have enough cards for a full war so the last card should be turned
		elif counter == index:
			cards_on_table_show_hide.append([card, True])
		# All other cards are face down
		else:
			cards_on_table_show_hide.append([card, False])
		counter += 1
	return cards_on_table_show_hide

# Page view
def index(request):
	# Empty context variable to add to
	context = {}
	# Production variable to toggle obfuscation
	production = True
	# First visit, game has not been started
	if not request.method == 'POST':
		# Create a deck using the new_deck function
		context = new_deck(context)
	# Game has started
	else:
		# Current game status is used in template to know whether game has been
		# started or not, or has ended
		game_status = "Going on"
		# Get submitted decks from user submitted POST request
		player_a_deck = request.POST.get('player_a_deck')
		player_b_deck = request.POST.get('player_b_deck')
		# Deobfuscate submitted decks using the deobfuscate function
		if production == True:
			player_a_deck = deobfuscate(player_a_deck)
			player_b_deck = deobfuscate(player_b_deck)
		# Convert decks to lists
		player_a_deck = player_a_deck.split(",")
		player_b_deck = player_b_deck.split(",")

		# Get submitted index value in order to know which cards to compare
		# The index is used in case of war to determine which cards to compare
		# and what cards to show to player
		index = int(request.POST.get('index'))
		context['current_index'] = index
		# In order to display cards in correct order in case of war for player_b
		# a number of slices are prepared and added to context as strings in a list.
		# number_of_slices is rounded up in case index is not divisible by 4 (endgame logic)
		number_of_slices = ceil(index/4)	
		slices = []
		# Only needed if number of slices is above 0
		if number_of_slices:
			start = 1
			end = 5
			for slice in range(number_of_slices):
				# Slice strings like "1:5", "5:9", ... for the template (format assumed)
				slices.append(str(start) + ":" + str(end))
				start += 4
				end += 4
		context['slices'] = slices
		# In order to display cards to player using a loop, the deck is sliced
		# by the index value plus 1. # If index is 0, 1 card should be shown.
		# If index is 4 because of war, 5 cards should be shown... and so on.
		a_cards_on_table = player_a_deck[:index+1]
		b_cards_on_table = player_b_deck[:index+1]
		# Cards on table is run through function to decide which cards to show face up/face down
		# to player and added to context.
		context['a_cards_on_table'] = show_hide_cards(a_cards_on_table, index)
		context['b_cards_on_table'] = show_hide_cards(b_cards_on_table, index)
		# Length of cards "on the table" is calculated in order to calculate remaining cards in player decks.
		# The value for player a is shown to the players and is also used for template card display logic.
		a_cards_on_table_length = len(a_cards_on_table)
		b_cards_on_table_length = len(b_cards_on_table)
		# Calculate number of cards in decks
		a_number_of_cards = len(player_a_deck)
		b_number_of_cards = len(player_b_deck)

		# Add remaining cards in deck to context to show to players
		a_remaining_in_deck = a_number_of_cards - a_cards_on_table_length
		b_remaining_in_deck = b_number_of_cards - b_cards_on_table_length
		context['a_remaining_in_deck'] = a_remaining_in_deck
		context['b_remaining_in_deck'] = b_remaining_in_deck
		### GAME LOGIC ###
		# Check if both players have decks large enough to compare
		if a_number_of_cards > index and b_number_of_cards > index:
			# Convert first card in decks to integer value in order to compare
			player_a_card = int(player_a_deck[index][:len(player_a_deck[index])-1])
			player_b_card = int(player_b_deck[index][:len(player_b_deck[index])-1])

			# Player a has the largest card
			if player_a_card > player_b_card:
				# Add cards in play to end of player a deck and delete them from beginning
				# of player a and player b decks
				player_a_deck += player_a_deck[:index+1] + player_b_deck[:index+1]
				del player_a_deck[:index+1]
				del player_b_deck[:index+1]
				# If a play is decided, index is set to 0
				index = 0
				context['message'] = "Du vandt runden!"
			# Player b has the largest card
			elif player_a_card < player_b_card:
				# Cards are added to deck in different order from player a to deck in order
				# to avoid risk of the game going on forever
				player_b_deck += player_a_deck[:index+1] + player_b_deck[:index+1]
				del player_a_deck[:index+1]
				del player_b_deck[:index+1]
				# If a play is decided, index is set to 0
				index = 0
				context['message'] = "Du tabte runden!"
			# Cards must be equal and war is on
			else:
				# In case of war normally four cards are added to the index, but
				# in order to accommodate a case of end-game war, there are special cases
				# if either player doesn't quite have enough cards for a full 4-card-turn war
				if a_number_of_cards >= index + 4 <= b_number_of_cards:
					index += 4
				# Since the if statement two levels up already checks that number of cards is larger
				# than the index value, an else with no criteria is enough to decide how many cards
				# each player has left to turn and add the smallest number to the index
				else:
					# Calculate the difference between number of cards and index for each player.
					# The smallest of the two differences is added to index to decide how many cards to use for war.
					# One is subtracted for the card already on the table
					a_difference = a_number_of_cards - index
					b_difference = b_number_of_cards - index
					index += min(a_difference, b_difference) - 1
					# Edge case: If war on last remaining card for either player, 1 is added to index to end the game
					# by getting the index above the number of cards in the deck of the player(s) with no cards left
					if a_remaining_in_deck == 0 or b_remaining_in_deck == 0:
						index += 1
				# Messages are different for single, double, triple wars and anything above.
				# Since the index can be upped by less than four, less than or equal is used to
				# decide which kind of war is on.
				if index <= 4:
					context['message'] = "Krig!"
				elif index <= 8:
					context['message'] = "Dobbeltkrig!"
				elif index <= 12:
					context['message'] = "Trippelkrig!"
				else:
					context['message'] = "Multikrig!"
		# Calculate length of decks after game logic has run
		player_a_deck_length = len(player_a_deck)
		player_b_deck_length = len(player_b_deck)
		# Compare lengths of decks to decide if someone has won. The number of cards on table for
		# next turn of cards is always at least one more than the index (index 0, 1 card, index 4,
		# 5 cards). There are three possible outcomes:
		# 1) Equal game: Both players are unable to turn and have equal sized decks (very, very rare!)
		# 2) Player a is unable to play and has a smaller deck than b (if both players are unable to turn, largest deck wins)
		# 3) Same as 2) for player b
		if player_a_deck_length <= index and player_b_deck_length <= index and player_a_deck_length == player_b_deck_length:
			context['message'] = "Spillet blev uafgjort. Hvor tit sker det lige?"
			game_status = "Over"
		elif player_a_deck_length <= index and player_a_deck_length < player_b_deck_length:
			context['message'] = "Du tabte spillet!"
			game_status = "Over"			
		elif player_b_deck_length <= index and player_b_deck_length < player_a_deck_length:
			context['message'] = "Du vandt spillet!"	
			game_status = "Over"			

		# Add size of decks after play to context to decide whether to show decks to player
		context['after_deck_a'] = player_a_deck
		context['after_deck_b'] = player_b_deck
		# Add game status to context
		context['game_status'] = game_status
		# Convert decks back to strings
		player_a_deck = ",".join(player_a_deck)
		player_b_deck = ",".join(player_b_deck)
		# Obfuscate decks using obfuscate function
		if production:
			player_a_deck = obfuscate(player_a_deck)
			player_b_deck = obfuscate(player_b_deck)
		# Context for form
		context['player_a_deck_form'] = player_a_deck
		context['player_b_deck_form'] = player_b_deck
		context['index'] = index
		# If game is over, create a new deck to add to form for new game.
		# new_deck() is a hypothetical helper standing in for the original deck creation code.
		if game_status == "Over":
			context['new_deck_form'] = new_deck()
	return render(request, 'krig/index.html', context)

And here is the template, index.html:

{% load static %}
{% spaceless %}
<!doctype html>
<html lang="da">
		<meta charset="utf-8">
		<meta name="description" content="Spil det populære, vanedannende kortspil krig mod computeren - online!">
		<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
		<link rel="stylesheet" href="{% static "krig/style.css" %}">
		<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png">
		<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png">
		<link rel="icon" type="image/png" sizes="16x16" href="/favicon-16x16.png">
		<link rel="manifest" href="/site.webmanifest">
		<link rel="mask-icon" href="/safari-pinned-tab.svg" color="#5bbad5">
		<meta name="msapplication-TileColor" content="#ffc40d">
		<meta name="theme-color" content="#ffffff">
		{% comment %}Most of stylesheet is loaded externally, but logic to size images in case of war is kept in template{% endcomment %}
		{% if current_index > 0 %}
			<style>
				img {
					width: 22%;
					display: inline;
				}
			</style>
		{% endif %}
		{% comment %}Status message of current round or game is displayed{% endcomment %}
		<p class="status">
			{{ message }}
		</p>
		{% comment %}Page is divided in two-column grid. Each column is aligned towards vertical center of page{% endcomment %}
		<div class="grid">
			{% comment %}Player a ("You") column{% endcomment %}
			<div class="item text-right">

				{% comment %}If any cards are left to turn, show number, if no cards are left, write no cards left{% endcomment %}
				<p class="cardsleft">
					{% if a_remaining_in_deck > 0 %}
						{{ a_remaining_in_deck }} kort tilbage i bunken
					{% elif a_remaining_in_deck == 0 %}
						Ingen kort tilbage!
					{% endif %}

				{% comment %}Back of card (deck) is shown if cards are left in deck or game has not begun{% endcomment %}
				{% if a_remaining_in_deck > 0 or not game_status %}
					<img src="{% static 'krig/back_r.svg' %}">
				{% endif %}

				{% comment %}Loop to show player's turned cards.{% endcomment %}
				{% for card in a_cards_on_table %}
					{% if card.1 == True %}
						<img src="{% static 'krig/'|add:card.0|add:'.svg' %}"><br>
					{% else %}
						<img src="{% static 'krig/back_r.svg' %}">
					{% endif %}
				{% endfor %}
			</div>

			{% comment %}Player b ("Computer") column{% endcomment %}
			<div class="item text-left">
				{% comment %}If any cards are left to turn, show number, if no cards are left, write no cards left{% endcomment %}
				<p class="cardsleft">
					{% if b_remaining_in_deck > 0 %}
						{{ b_remaining_in_deck }} kort tilbage i bunken
					{% elif b_remaining_in_deck == 0 %}
						Ingen kort tilbage!
					{% endif %}

				{% comment %}
					The order of the deck and the first turned card is different for player b who plays on the right side.
					Therefore if there is a first card in player b's cards on table that card is shown.
				{% endcomment %}
				{% if b_cards_on_table.0 %}
					<img src="{% static 'krig/'|add:b_cards_on_table.0.0|add:'.svg' %}">
				{% endif %}

				{% comment %}If b has cards left in deck or game has not started, show back of deck{% endcomment %}
				{% if b_remaining_in_deck > 0 or not game_status %}
						<img src="{% static 'krig/back_r.svg' %}">
				{% endif %}
				{% comment %}
					Due to the order of player b's shown cards being different than for player a, this loop to show cards
					in case of war is a little different from player a's.
					The slices variable contains pairs of values saved as strings that the Django template filter |slice can
					understand, e.g. "1:5". These are looped through so that only parts of b_cards_on_table corresponding to
					the slice is looped through for each single, double, etc. war. The loop through b_cards_on_table is reversed
					because the card being turned is shown left of the hidden cards in the war.
				{% endcomment %}
				{% for slice_cut in slices %}
					{% for card in b_cards_on_table|slice:slice_cut reversed %}
						{% if card.1 == True %}
							<img src="{% static 'krig/'|add:card.0|add:'.svg' %}">
						{% else %}
							<img src="{% static 'krig/back_r.svg' %}">
						{% endif %}
					{% endfor %}<br>
				{% endfor %}
			</div>
		</div>

		{% comment %}
			This form is used for user input with the text in the button depending on whether user is on:
			1) Starting page: User can start a game
			2) In an ongoing game: User can turn next card
			3) In a game that has ended: User can start a new game
		{% endcomment %}
		<form class="next" action="{% url 'krig_index' %}" method="post">
			{% csrf_token %}
			<input name="player_a_deck" type="hidden" value="{{ player_a_deck_form }}">
			<input name="player_b_deck" type="hidden" value="{{ player_b_deck_form }}">
			<input name="index" type="hidden" value="{{ index }}">
			<button type="submit">{% if not game_status %}Start spillet{% elif game_status == "Going on" %}Vend næste kort{% elif game_status == "Over" %}Start nyt spil{% endif %}</button>
		</form>
</html>
{% endspaceless %}

Have fun!

How I failed to make LinkedIn fix their broken international domain URL parser

In Denmark it is possible to register domains with funny characters such as æ, ø and å. And we do. One prominent example is our national portal for booking Covid-19 tests at https://coronaprøver.dk. Wikipedia calls these beasts internationalised domain names, so they must indeed exist.
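Under the hood, these names travel as plain ASCII in a "punycode" (xn--) encoding. A quick sketch with Python's built-in idna codec, using a placeholder domain of my own choosing:

```python
# Convert a Unicode domain to the ASCII "xn--" (punycode) form
# that DNS and URL parsers actually work with, and back again.
unicode_name = "bücher.example"  # illustrative placeholder, not a real site
ascii_name = unicode_name.encode("idna").decode("ascii")
print(ascii_name)  # xn--bcher-kva.example

# The conversion is lossless, so decoding restores the original
assert ascii_name.encode("ascii").decode("idna") == unicode_name
```

Browsers do this conversion transparently, which is why a sane URL parser never needs to choke on an ø.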

Recently I quit my job (have a new one now, luckily) and found myself making posts on a social network known as LinkedIn to improve my prospects.

One of these posts was about a hobby project of mine, wishlist.dk, an ad-free wish registry. The big player on the Danish wish registry market (what a market: it seems every novice web developer in Denmark has launched one of these) is Ønskeskyen at https://ønskeskyen.dk.

What I wanted to let my network know was something like:

“I have launched wishlist.dk – a gratis, ad- and surveillance-free alternative to evil wish list giant ønskeskyen.dk”

– Morten Helmstedt, job seeker

Alas, LinkedIn’s URL parser breaks in many ways when trying to express your career news and feelings through internationalised domain names.

How do thee fail? Let me count the ways.

When making a post on LinkedIn with a URL, LinkedIn will try to:

  • Create a preview of the first URL in the post
  • Create a short link for every URL in the post that contains a path (i.e. not for top-level domains and subdomains without a path). The short link will generally look like a lnkd.in address and point to a LinkedIn redirect URL.

Here are the bugs I noticed in action:

Posting like a sane person

Trying to post an internationalised domain name like any sane person would. Post preview fails to load. The link in the post itself works as expected, though.

Posting like a LinkedIn person

Whois’ing the “real” domain name and posting it like no true Dane would. Post preview succeeds.

Things go from bad to worse when trying to post an internationalised domain name with a path, i.e. a URL that points to a specific page rather than just the front page.

If I just post that URL, I get a post like this:

LinkedIn shortens the link to make it more readable and to be able to track our smallest actions on the world wide web.

What happens when I click the link is this:


An error! (Invalid redirect)

My browser (Firefox) tries to GET the URL and is met with a 301 redirect:

A 301 redirect in action!

And then:

The submitted URL is stored, but the Location header should probably be the punycode (xn--) form of the URL, or maybe even the percent-encoded Unicode form. Who knows? It’s complicated. LinkedIn engineers should definitely look into this!

How I tried to fix this mess

Well, I contacted LinkedIn on Twitter (WHAT!), tried writing to their security e-mail address (no reply, of course, but it was the only address I could find) and got in touch with a very understanding Member Support Consultant named Vegard, who tried their best:

Pretty impressive response after describing the problem.

If our URL parser doesn’t work, just change your URL

But then the engineering team told me that I should just stop posting internationalised domain names to LinkedIn:

True for Chrome, not for Firefox, not from a usability perspective

I tried to have Vegard tell the engineers at LinkedIn to read up on internationalised domain names, but no such luck:

The sorry end.


An aside:

As another hobby project, I created a very simple short link generator (like wish registries, it seems every aspiring web developer in Denmark has made one of these). Using Django’s built-in URLField, I can validate, store and correctly redirect internationalised domain names with hardly any work at all on my part.
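Django's URLField does the validation out of the box; the underlying normalisation a shortener needs can be sketched with the standard library alone (the function name and example URLs here are my own, illustrative choices):

```python
from urllib.parse import quote, urlsplit, urlunsplit

def normalise_idn_url(url: str) -> str:
    """Return an ASCII-safe form of a URL whose hostname may be an
    internationalised domain name, suitable for e.g. a Location header."""
    parts = urlsplit(url)
    # Hostname: Unicode -> punycode ("xn--" form)
    host = parts.hostname.encode("idna").decode("ascii")
    if parts.port is not None:
        host = f"{host}:{parts.port}"
    # Path: percent-encode any non-ASCII characters, keep "/" intact
    path = quote(parts.path, safe="/%")
    return urlunsplit((parts.scheme, host, path, parts.query, parts.fragment))

print(normalise_idn_url("https://bücher.example/ønske/1"))
# https://xn--bcher-kva.example/%C3%B8nske/1
```

The punycode hostname plus percent-encoded path is exactly the ASCII form a redirect can safely carry.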

If only tools like that were available for the engineers at LinkedIn to use for their URL parsing and shortening…

Server administration (for beginners)

I run my Django-based websites on a tiny Virtual Private Server (VPS) at DigitalOcean. It costs $6.25 a month. If you are interested in trying it out, you can use this link: [link removed]. When you use the link, you get $100 of credit to spend within 60 days. If you later spend $25 of real money, $25 is also credited to my account.

Anyway: Last night a script I use to back up my databases failed, and I didn't quite understand why. It was something about not being allowed to log in over SSH. So I looked at my server's resource usage:

During the night, the CPU load had gone from about 3% to around 15%. Ouch.

I first examined the running processes with the Linux command top, but I couldn't really spot a problem:

After a bit of googling, I found out how to inspect my system logs with the command journalctl:

Ouch. A bunch of different IP addresses were apparently busy trying to log in to my server over SSH.
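A small pipeline over the journalctl output shows who is knocking. The log lines below are fabricated samples in the usual sshd format, since the real output contains real addresses:

```shell
# Count failed SSH login attempts per source IP.
# sample.log stands in for the output of: journalctl -u ssh
cat > sample.log <<'EOF'
Jul 14 03:12:01 vps sshd[811]: Failed password for root from 198.51.100.7 port 40112 ssh2
Jul 14 03:12:03 vps sshd[813]: Failed password for invalid user admin from 203.0.113.9 port 51023 ssh2
Jul 14 03:12:05 vps sshd[815]: Failed password for root from 198.51.100.7 port 40190 ssh2
EOF
grep 'Failed password' sample.log \
  | grep -oE 'from [0-9.]+' \
  | awk '{print $2}' \
  | sort | uniq -c | sort -rn
```

On Debian/Ubuntu the SSH unit is called ssh; on other distributions it may be sshd.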

I made my firewall more restrictive by opening only for the couple of IP blocks (e.g. my home internet connection) that I know need access. All other incoming traffic to port 22 (which SSH uses) I blocked.
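With ufw, the firewall frontend DigitalOcean's Ubuntu images commonly ship with, the tightening looks roughly like this (the address block is an illustrative placeholder for your own known networks):

```shell
# Allow SSH (port 22) only from a known address block, then
# reject everything else on that port. ufw evaluates rules in
# order, so the allow rule must come first.
sudo ufw allow from 203.0.113.0/24 to any port 22 proto tcp
sudo ufw deny 22/tcp
sudo ufw status numbered   # verify the resulting rule order
```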


My little server is doing fine again – and I learned a bit about troubleshooting and monitoring Linux servers.

How I got hold of the domain name for my new wish list service

Not long ago, I wrote about my new wish list service, a surveillance- and ad-free alternative to services like Ønskeskyen.

I thought my service was good enough to deserve its own domain name on the internet, but all the good names I could think of were taken.

One of the domains I had my eye on was wishlist.dk. A descriptive, relatively short, globalisation-ready domain. At the time, the front page of wishlist.dk looked roughly like this:

That is not allowed!

In Denmark, it is not permitted to maintain the registration of a domain name solely with resale in mind.

So I opened a case with the Complaints Board for Domain Names (Klagenævnet for Domænenavne). It costs DKK 160 for private individuals, but you get the money back if you win. I documented my new service and the current use of wishlist.dk, and then I wrote:

I have built a free, non-commercial and surveillance-free wish list service as an alternative to the commercial services – there have recently been several stories in the media about these sites' problems complying with data protection law.

The site is ready to go live, but lacks a descriptive domain. I have attached a screenshot of the page I have developed (exhibit 1).

I wish to use the domain wishlist.dk (English for ønskeseddel) as a descriptive domain for the site's content. wishlist.dk is, however, already registered. The registrant is not marked as anonymous in whois, but is a service offering anonymous registration of domain names (Anonymize, Inc.). I have therefore not been able to contact the owner of the domain. wishlist.dk has no content apart from a sales page offering the domain for €2,500 (exhibit 2). The sales page is run by a company that – as far as I can tell – deals exclusively in the sale of already registered domains. The domain is also listed there at €2,500 with a private seller ("private seller", exhibit 3).

I believe the domain should be transferred to me, since I have already developed a service/website for creating wish lists and thus have an interest in a descriptive domain name for my site.

The current use of the domain is contrary to good domain name practice and section 25(2) of the Danish Act on Internet Domains, which states that "registrants must not register and maintain registrations of domain names solely with resale or rental in mind". The current registration is maintained solely with resale of the domain in mind, which is evident from the fact that visiting wishlist.dk you are met with a sales page run by a company dealing exclusively in the sale of already registered domains. The domain has been registered for a long period – possibly by several different registrants – but has, as far as I can tell, never been used for content with any relation to the domain name.

That is why I believe the domain should be transferred to me.

I submitted the complaint on 3 July, and on 11 September the decision arrived. The person behind the previous registration had not responded in the case.

I am taking over the right to use wishlist.dk
