Building a graph of Btt Ürettikleriniz posts

Friends :books: There is an idea to turn your posts under the Ürettikleriniz category into a graph. If it gets approved, I think it would be a nice way to stand out. It will support backlinks. An example video is below.


Which program are we even going to do this with?

There is a panel on the left side; a link could be added under the topics. Example site: https://zettelkasten.uncomfyhalomacro.pl/

Okay, but once this is done, how will we access it?

If you are using Brave there may be a bug; if the link isn't clickable, you can turn off the Shields feature to see the preview, or use Firefox instead.

You'll just click the link, the same way you'd visit any site.

Edit: You can check it out at this link: https://btt-beta.vercel.app/. I'll keep making adjustments, friends. You can also switch to the 3D view.

Edit: Shelved for now.

The script is below. Application used: Emacs, with the org-roam and org-roam-ui plugins.


from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup
import os
import requests
import time
import subprocess

base_url = 'https://btt.community/raw/'
markdown_directory = 'btt_downloads'

# Set up a headless Chrome browser
chrome_options = Options()
chrome_options.add_argument('--headless')  # Run Chrome in headless mode (without GUI)
driver = webdriver.Chrome(options=chrome_options)

# Navigate to the URL
url = 'https://btt.community/c/urettikleriniz/22'
driver.get(url)

# Wait for lazy-loaded content to appear (adjust timeout as needed)
wait = WebDriverWait(driver, 10)
wait.until(EC.presence_of_element_located((By.CLASS_NAME, 'raw-link')))

# Get the initial document height
last_height = driver.execute_script("return document.body.scrollHeight")

# Function to check if the page has scrolled and return the new document height
def scroll():
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)  # Adjust sleep time as needed
    new_height = driver.execute_script("return document.body.scrollHeight")
    return new_height

# Delete the intermediate .md files once they have been converted to .org
def remove_md_files(directory):
    for filename in os.listdir(directory):
        if filename.endswith(".md"):
            file_path = os.path.join(directory, filename)
            os.remove(file_path)

# Convert a markdown file to org syntax with pandoc (no syntax highlighting)
def convert_to_org(input_file, output_file):
    command = ["pandoc", input_file, "--from", "markdown", "--to", "org", "--no-highlight", "-o", output_file]
    subprocess.run(command)

# Scroll until the document height no longer increases
while True:
    new_height = scroll()
    if new_height == last_height:
        break
    last_height = new_height

# Get the final page source after lazy loading
page_source = driver.page_source

# Close the browser
driver.quit()

# Parse the HTML content of the page using BeautifulSoup
soup = BeautifulSoup(page_source, 'html.parser')

# Find all elements with the raw link class
raw_link_elements = soup.find_all(class_='raw-link')

# Check if any elements were found
if raw_link_elements:
    # Create an array to store the raw links
    raw_links = []

    # Loop through each element and extract the raw link
    for raw_link_element in raw_link_elements:
        raw_links.append(raw_link_element['href'])

    # Pull the numeric topic IDs out of the collected raw links
    numeric_ids = [int(link.split('/')[-1]) for link in raw_links]

    # Download the raw markdown for each topic ID (base_url and the download
    # directory are already defined at the top of the script)
    download_folder = markdown_directory
    os.makedirs(download_folder, exist_ok=True)

    for numeric_id in numeric_ids:
        url = f'{base_url}{numeric_id}'
        response = requests.get(url)

        if response.status_code == 200:
            # Save the downloaded content to a file in the download folder
            with open(os.path.join(download_folder, f'{numeric_id}_file.md'), 'wb') as file:
                file.write(response.content)
            print(f'Downloaded file for ID {numeric_id}')
        else:
            print(f'Failed to download file for ID {numeric_id}')

    # Convert every downloaded markdown file to org syntax
    for filename in os.listdir(markdown_directory):
        if filename.endswith(".md"):
            input_file = os.path.join(markdown_directory, filename)
            output_file = os.path.join(markdown_directory, f"{os.path.splitext(filename)[0]}.org")
            convert_to_org(input_file, output_file)

    remove_md_files(markdown_directory)

    print('Download and conversion completed.')
else:
    print("No elements with the raw link class found on the page.")
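One caveat if the converted files are meant to feed org-roam: in org-roam v2 a file only becomes a node (and therefore gets backlinks in the graph) when it carries an :ID: property and a #+title: line, and pandoc's output does not add these. Below is a minimal post-processing sketch under that assumption; the add_org_roam_header name and the title-from-filename scheme are illustrative, not part of the script above.

import os
import uuid

def add_org_roam_header(org_file):
    # Prepend a PROPERTIES drawer with a fresh UUID plus a #+title line,
    # which is what org-roam v2 needs to register the file as a node
    with open(org_file, 'r', encoding='utf-8') as f:
        body = f.read()
    if ':ID:' in body:
        return  # already tagged, leave it alone
    title = os.path.splitext(os.path.basename(org_file))[0]
    header = f":PROPERTIES:\n:ID: {uuid.uuid4()}\n:END:\n#+title: {title}\n\n"
    with open(org_file, 'w', encoding='utf-8') as f:
        f.write(header + body)

for filename in os.listdir('btt_downloads'):
    if filename.endswith('.org'):
        add_org_roam_header(os.path.join('btt_downloads', filename))

After that, pointing org-roam-directory at btt_downloads and running M-x org-roam-db-sync should be enough for org-roam-ui to show the files in the graph.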
